Jan 21 10:06:01 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 21 10:06:01 crc restorecon[4683]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 
10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:06:01 crc 
restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 
10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:01 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 
10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc 
restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:06:02 crc restorecon[4683]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 10:06:02 crc restorecon[4683]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 10:06:02 crc restorecon[4683]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 21 10:06:02 crc kubenswrapper[4684]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 10:06:02 crc kubenswrapper[4684]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 21 10:06:02 crc kubenswrapper[4684]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 10:06:02 crc kubenswrapper[4684]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
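The long run of "not reset as customized by admin" messages above is expected restorecon behavior rather than an error: container_file_t is listed among the SELinux targeted policy's customizable types, so restorecon deliberately leaves those labels in place unless forced. A minimal shell sketch for checking one of the flagged paths against the policy default (the path is taken from this log; forcing a reset with -F is usually the wrong fix on a container host, where these labels are set on purpose, and is shown only for completeness):

    # Compare the on-disk label with the policy default for one flagged path
    ls -Z /var/lib/kubelet/plugins/csi-hostpath/csi.sock
    matchpathcon /var/lib/kubelet/plugins/csi-hostpath/csi.sock

    # Types restorecon treats as admin-customized (targeted policy)
    cat /etc/selinux/targeted/contexts/customizable_types

    # Force a recursive, verbose relabel, overriding the customizable-type
    # exemption; on an OpenShift/CRC node this would clobber intentional labels.
    restorecon -RFv /var/lib/kubelet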
Jan 21 10:06:02 crc kubenswrapper[4684]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 21 10:06:02 crc kubenswrapper[4684]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.352150 4684 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356299 4684 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356327 4684 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356333 4684 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356338 4684 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356344 4684 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356349 4684 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356355 4684 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356378 4684 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356384 4684 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356392 4684 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
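Every "Flag ... has been deprecated" line above points at the same remedy: set the value in the file passed via the kubelet's --config flag. Below is a minimal sketch of the equivalent KubeletConfiguration stanzas, assuming a stock upstream kubelet; the file path, CRI socket, taint, and reservation values are illustrative assumptions, and on a CRC/OpenShift node this file is machine-managed, so hand edits would be overwritten:

    # /etc/kubernetes/kubelet-conf.yaml (hypothetical path; CRC generates its own)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # replaces --container-runtime-endpoint
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
    # replaces --volume-plugin-dir
    volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
    # replaces --register-with-taints
    registerWithTaints:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
    # replaces --system-reserved
    systemReserved:
      cpu: "500m"
      memory: "1Gi"
    # replaces --minimum-container-ttl-duration: use eviction thresholds instead
    evictionHard:
      memory.available: "100Mi"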
Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356399 4684 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356405 4684 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356412 4684 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356418 4684 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356424 4684 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356429 4684 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356434 4684 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356443 4684 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356448 4684 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356452 4684 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356456 4684 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356460 4684 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356465 4684 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356469 4684 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356473 4684 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356477 4684 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356481 4684 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356486 4684 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356491 4684 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356495 4684 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356500 4684 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356505 4684 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356511 4684 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356516 4684 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356521 4684 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356527 4684 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356531 4684 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356536 4684 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356543 4684 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356549 4684 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356556 4684 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356561 4684 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356566 4684 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356572 4684 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356576 4684 feature_gate.go:330] unrecognized feature gate: Example Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356581 4684 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356587 4684 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356591 4684 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356595 4684 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356600 4684 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356604 4684 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356608 4684 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356613 4684 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356619 4684 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356623 4684 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356627 4684 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356631 4684 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 
10:06:02.356635 4684 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356639 4684 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356643 4684 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356651 4684 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356655 4684 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356659 4684 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356663 4684 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356669 4684 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356673 4684 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356677 4684 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356682 4684 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356686 4684 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356690 4684 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.356696 4684 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
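
A note on the wall of "unrecognized feature gate" warnings above: the rejected names (GCPLabelsTags, NewOLM, RouteAdvertisements, and so on) appear to be OpenShift cluster-level feature gates that the vanilla kubelet does not compile in. Unknown names are warned about and skipped rather than treated as fatal, while known gates that are already GA or deprecated (CloudDualStackNodeIPs, ValidatingAdmissionPolicy, KMSv1) draw their own "will be removed" warnings. A minimal Go sketch of that pattern, using hypothetical types (the real implementation lives in k8s.io/component-base/featuregate):

    package main

    import "log"

    type stage int

    const (
    	alpha stage = iota
    	beta
    	ga
    	deprecated
    )

    // known mirrors a component's compiled-in feature gate table.
    var known = map[string]stage{
    	"CloudDualStackNodeIPs":                  ga,
    	"DisableKubeletCloudCredentialProviders": ga,
    	"ValidatingAdmissionPolicy":              ga,
    	"KMSv1":                                  deprecated,
    	"DynamicResourceAllocation":              alpha,
    }

    // apply sets requested gates, warning (not failing) on names the
    // component does not recognize -- the behavior seen in the log above.
    func apply(requested, enabled map[string]bool) {
    	for name, value := range requested {
    		st, ok := known[name]
    		if !ok {
    			log.Printf("W unrecognized feature gate: %s", name)
    			continue
    		}
    		switch st {
    		case ga:
    			log.Printf("W Setting GA feature gate %s=%v. It will be removed in a future release.", name, value)
    		case deprecated:
    			log.Printf("W Setting deprecated feature gate %s=%v. It will be removed in a future release.", name, value)
    		}
    		enabled[name] = value
    	}
    }

    func main() {
    	enabled := map[string]bool{}
    	apply(map[string]bool{
    		"GCPLabelsTags":         true, // cluster-level gate: unknown to the kubelet
    		"CloudDualStackNodeIPs": true, // GA upstream: set, with a warning
    		"KMSv1":                 true, // deprecated upstream: set, with a warning
    	}, enabled)
    	log.Printf("I feature gates: %v", enabled)
    }
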
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357060 4684 flags.go:64] FLAG: --address="0.0.0.0" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357076 4684 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357087 4684 flags.go:64] FLAG: --anonymous-auth="true" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357101 4684 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357109 4684 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357114 4684 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357122 4684 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357128 4684 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357134 4684 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357140 4684 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357146 4684 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357151 4684 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357156 4684 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357162 4684 flags.go:64] FLAG: --cgroup-root="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357168 4684 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357173 4684 flags.go:64] FLAG: --client-ca-file="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357178 4684 flags.go:64] FLAG: --cloud-config="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357183 4684 flags.go:64] FLAG: --cloud-provider="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357189 4684 flags.go:64] FLAG: --cluster-dns="[]" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357195 4684 flags.go:64] FLAG: --cluster-domain="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357200 4684 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357206 4684 flags.go:64] FLAG: --config-dir="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357212 4684 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357218 4684 flags.go:64] FLAG: --container-log-max-files="5" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357226 4684 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357231 4684 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357237 4684 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357242 4684 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357248 4684 flags.go:64] FLAG: --contention-profiling="false" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 
10:06:02.357254 4684 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357259 4684 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357265 4684 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357270 4684 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357277 4684 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357282 4684 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357288 4684 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357293 4684 flags.go:64] FLAG: --enable-load-reader="false" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357298 4684 flags.go:64] FLAG: --enable-server="true" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357304 4684 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357311 4684 flags.go:64] FLAG: --event-burst="100" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357317 4684 flags.go:64] FLAG: --event-qps="50" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357321 4684 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357327 4684 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357332 4684 flags.go:64] FLAG: --eviction-hard="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357339 4684 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357344 4684 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357351 4684 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357357 4684 flags.go:64] FLAG: --eviction-soft="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357378 4684 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357386 4684 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357392 4684 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357398 4684 flags.go:64] FLAG: --experimental-mounter-path="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357403 4684 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357408 4684 flags.go:64] FLAG: --fail-swap-on="true" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357414 4684 flags.go:64] FLAG: --feature-gates="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357421 4684 flags.go:64] FLAG: --file-check-frequency="20s" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357426 4684 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357432 4684 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357437 4684 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 
10:06:02.357442 4684 flags.go:64] FLAG: --healthz-port="10248" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357447 4684 flags.go:64] FLAG: --help="false" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357453 4684 flags.go:64] FLAG: --hostname-override="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357458 4684 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357463 4684 flags.go:64] FLAG: --http-check-frequency="20s" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357468 4684 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357473 4684 flags.go:64] FLAG: --image-credential-provider-config="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357478 4684 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357483 4684 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357488 4684 flags.go:64] FLAG: --image-service-endpoint="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357494 4684 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357499 4684 flags.go:64] FLAG: --kube-api-burst="100" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357505 4684 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357510 4684 flags.go:64] FLAG: --kube-api-qps="50" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357516 4684 flags.go:64] FLAG: --kube-reserved="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357521 4684 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357525 4684 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357531 4684 flags.go:64] FLAG: --kubelet-cgroups="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357535 4684 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357541 4684 flags.go:64] FLAG: --lock-file="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357546 4684 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357571 4684 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357577 4684 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357585 4684 flags.go:64] FLAG: --log-json-split-stream="false" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357590 4684 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357595 4684 flags.go:64] FLAG: --log-text-split-stream="false" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357600 4684 flags.go:64] FLAG: --logging-format="text" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357606 4684 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357611 4684 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357616 4684 flags.go:64] FLAG: --manifest-url="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357621 4684 
flags.go:64] FLAG: --manifest-url-header="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357630 4684 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357635 4684 flags.go:64] FLAG: --max-open-files="1000000" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357643 4684 flags.go:64] FLAG: --max-pods="110" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357648 4684 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357655 4684 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357660 4684 flags.go:64] FLAG: --memory-manager-policy="None" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357665 4684 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357671 4684 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357677 4684 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357684 4684 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357698 4684 flags.go:64] FLAG: --node-status-max-images="50" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357706 4684 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357712 4684 flags.go:64] FLAG: --oom-score-adj="-999" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357718 4684 flags.go:64] FLAG: --pod-cidr="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357723 4684 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357733 4684 flags.go:64] FLAG: --pod-manifest-path="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357738 4684 flags.go:64] FLAG: --pod-max-pids="-1" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357743 4684 flags.go:64] FLAG: --pods-per-core="0" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357748 4684 flags.go:64] FLAG: --port="10250" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357753 4684 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357758 4684 flags.go:64] FLAG: --provider-id="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357766 4684 flags.go:64] FLAG: --qos-reserved="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357772 4684 flags.go:64] FLAG: --read-only-port="10255" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357777 4684 flags.go:64] FLAG: --register-node="true" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357782 4684 flags.go:64] FLAG: --register-schedulable="true" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357787 4684 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357797 4684 flags.go:64] FLAG: --registry-burst="10" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357802 4684 flags.go:64] FLAG: --registry-qps="5" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357807 4684 flags.go:64] 
FLAG: --reserved-cpus="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357812 4684 flags.go:64] FLAG: --reserved-memory="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357820 4684 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357825 4684 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357830 4684 flags.go:64] FLAG: --rotate-certificates="false" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357835 4684 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357840 4684 flags.go:64] FLAG: --runonce="false" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357844 4684 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357850 4684 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357856 4684 flags.go:64] FLAG: --seccomp-default="false" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357861 4684 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357866 4684 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357872 4684 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357877 4684 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357883 4684 flags.go:64] FLAG: --storage-driver-password="root" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357889 4684 flags.go:64] FLAG: --storage-driver-secure="false" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357895 4684 flags.go:64] FLAG: --storage-driver-table="stats" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357900 4684 flags.go:64] FLAG: --storage-driver-user="root" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357905 4684 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357910 4684 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357915 4684 flags.go:64] FLAG: --system-cgroups="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357921 4684 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357930 4684 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357935 4684 flags.go:64] FLAG: --tls-cert-file="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357939 4684 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357945 4684 flags.go:64] FLAG: --tls-min-version="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357950 4684 flags.go:64] FLAG: --tls-private-key-file="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357955 4684 flags.go:64] FLAG: --topology-manager-policy="none" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357961 4684 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357967 4684 flags.go:64] FLAG: --topology-manager-scope="container" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357972 4684 flags.go:64] 
FLAG: --v="2" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357980 4684 flags.go:64] FLAG: --version="false" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357987 4684 flags.go:64] FLAG: --vmodule="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357993 4684 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.357998 4684 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358119 4684 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358126 4684 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358133 4684 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358138 4684 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358143 4684 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358149 4684 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358170 4684 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358176 4684 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358182 4684 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
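
Each "flags.go:64] FLAG: --name=\"value\"" record above is the kubelet dumping its entire parsed flag set at startup, which makes the effective command line reconstructible from the journal alone. The underlying pattern is a VisitAll walk over the flag set, much like PrintFlags in k8s.io/component-base; a minimal sketch using github.com/spf13/pflag, the flag library Kubernetes components build on:

    package main

    import (
    	"log"

    	"github.com/spf13/pflag"
    )

    func main() {
    	fs := pflag.NewFlagSet("kubelet-sketch", pflag.ExitOnError)
    	fs.String("node-ip", "", "node IP address")
    	fs.Int32("max-pods", 110, "maximum pods per node")
    	_ = fs.Parse([]string{"--node-ip=192.168.126.11"})

    	// Walk every registered flag and log its effective value,
    	// matching the FLAG: --name="value" records above.
    	fs.VisitAll(func(f *pflag.Flag) {
    		log.Printf("FLAG: --%s=%q", f.Name, f.Value.String())
    	})
    }
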
Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358190 4684 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358195 4684 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358200 4684 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358205 4684 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358211 4684 feature_gate.go:330] unrecognized feature gate: Example Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358217 4684 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358222 4684 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358226 4684 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358232 4684 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358237 4684 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358242 4684 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358248 4684 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358252 4684 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358257 4684 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358261 4684 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358266 4684 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358270 4684 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358275 4684 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358280 4684 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358284 4684 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358288 4684 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358293 4684 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358299 4684 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358305 4684 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358311 4684 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358316 4684 
feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358321 4684 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358327 4684 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358331 4684 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358337 4684 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358341 4684 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358346 4684 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358351 4684 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358355 4684 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358377 4684 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358385 4684 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358392 4684 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358397 4684 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358402 4684 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358406 4684 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358410 4684 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358415 4684 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358420 4684 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358424 4684 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358428 4684 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358433 4684 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358437 4684 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358441 4684 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358445 4684 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358450 4684 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358455 4684 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 10:06:02 crc 
kubenswrapper[4684]: W0121 10:06:02.358460 4684 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358464 4684 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358469 4684 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358474 4684 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358478 4684 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358483 4684 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358488 4684 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358495 4684 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358500 4684 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358505 4684 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.358510 4684 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.358518 4684 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.368847 4684 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.368910 4684 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.368998 4684 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369012 4684 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369016 4684 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369023 4684 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369029 4684 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369038 4684 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369043 4684 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369048 4684 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369053 4684 feature_gate.go:330] unrecognized feature gate: 
AWSClusterHostedDNS Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369058 4684 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369064 4684 feature_gate.go:330] unrecognized feature gate: Example Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369071 4684 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369076 4684 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369082 4684 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369092 4684 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369098 4684 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369104 4684 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369109 4684 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369114 4684 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369118 4684 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369124 4684 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369128 4684 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369133 4684 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369138 4684 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369142 4684 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369149 4684 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369155 4684 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369160 4684 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369165 4684 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369169 4684 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369174 4684 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369180 4684 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369187 4684 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369192 4684 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369198 4684 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369204 4684 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369211 4684 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369217 4684 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369223 4684 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369228 4684 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369233 4684 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369238 4684 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369243 4684 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369248 4684 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369253 4684 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369258 4684 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369264 4684 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369269 4684 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369275 4684 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369280 4684 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369284 4684 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 
10:06:02.369288 4684 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369293 4684 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369298 4684 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369303 4684 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369308 4684 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369314 4684 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369319 4684 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369324 4684 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369328 4684 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369333 4684 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369337 4684 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369341 4684 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369347 4684 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369353 4684 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369381 4684 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369387 4684 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369393 4684 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369398 4684 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369403 4684 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.369408 4684 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.369416 4684 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370556 4684 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370626 4684 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370634 4684 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370641 4684 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370647 4684 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370653 4684 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370659 4684 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370665 4684 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370670 4684 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370676 4684 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370681 4684 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370694 4684 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
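
The "feature_gate.go:386] feature gates: {map[...]}" records are the useful summary in all of this: the final resolved gate map, printed once per parsing pass, which is why the same map appears repeatedly with identical contents. When working through large journals it can help to extract and diff these maps mechanically; a small hypothetical Go helper keyed to the exact format above:

    package main

    import (
    	"fmt"
    	"regexp"
    	"strings"
    )

    // parseGates pulls "Name:bool" pairs out of a
    // "feature gates: {map[A:true B:false]}" log record.
    func parseGates(line string) map[string]bool {
    	re := regexp.MustCompile(`feature gates: \{map\[(.*)\]\}`)
    	m := re.FindStringSubmatch(line)
    	gates := map[string]bool{}
    	if m == nil {
    		return gates
    	}
    	for _, pair := range strings.Fields(m[1]) {
    		if name, val, ok := strings.Cut(pair, ":"); ok {
    			gates[name] = val == "true"
    		}
    	}
    	return gates
    }

    func main() {
    	line := `I0121 10:06:02.358518 4684 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}`
    	fmt.Println(parseGates(line))
    }
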
Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370708 4684 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370717 4684 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370733 4684 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370739 4684 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370746 4684 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370753 4684 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370760 4684 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370769 4684 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370777 4684 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370785 4684 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370794 4684 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370802 4684 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370810 4684 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370817 4684 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370831 4684 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370838 4684 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370845 4684 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370852 4684 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370859 4684 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370866 4684 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370874 4684 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370882 4684 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370887 4684 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370895 4684 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370903 4684 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 
10:06:02.370910 4684 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370920 4684 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370926 4684 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370932 4684 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370937 4684 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370943 4684 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370950 4684 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370956 4684 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370962 4684 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370967 4684 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370973 4684 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370979 4684 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370984 4684 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.370990 4684 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.371001 4684 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.371007 4684 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.371013 4684 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.371019 4684 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.371024 4684 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.371030 4684 feature_gate.go:330] unrecognized feature gate: Example Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.371036 4684 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.371041 4684 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.371047 4684 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.371053 4684 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.371060 4684 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.371067 4684 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 10:06:02 crc 
kubenswrapper[4684]: W0121 10:06:02.371074 4684 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.371089 4684 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.371095 4684 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.371100 4684 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.371106 4684 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.371111 4684 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.371127 4684 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.371133 4684 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.371147 4684 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.371565 4684 server.go:940] "Client rotation is on, will bootstrap in background" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.377332 4684 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.377541 4684 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
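
"Client rotation is on" plus the certificate_manager records that follow describe the kubelet's client-certificate bootstrap: it loads /var/lib/kubelet/pki/kubelet-client-current.pem, computes a rotation deadline inside the certificate's validity window, and, since the deadline logged below (2025-12-10) appears to be already in the past relative to this boot, it rotates immediately; the first CSR attempt then fails with connection refused because the API server is not yet up. To inspect the certificate on disk independently, a small sketch using Go's standard crypto/x509 (path taken from the log; run as a user that can read the file):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    )

    func main() {
    	// Same file the kubelet loads above; adjust the path as needed.
    	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
    	if err != nil {
    		log.Fatal(err)
    	}
    	// The file holds certificate and key PEM blocks; inspect the certificates.
    	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
    		if block.Type != "CERTIFICATE" {
    			continue
    		}
    		cert, err := x509.ParseCertificate(block.Bytes)
    		if err != nil {
    			log.Fatal(err)
    		}
    		fmt.Printf("subject=%s notAfter=%s\n", cert.Subject, cert.NotAfter)
    	}
    }
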
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.378445 4684 server.go:997] "Starting client certificate rotation"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.378488 4684 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.378660 4684 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-10 21:18:46.572726239 +0000 UTC
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.378768 4684 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.384960 4684 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 21 10:06:02 crc kubenswrapper[4684]: E0121 10:06:02.386166 4684 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.388655 4684 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.397413 4684 log.go:25] "Validated CRI v1 runtime API"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.416585 4684 log.go:25] "Validated CRI v1 image API"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.418866 4684 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.421592 4684 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-21-10-01-56-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.421629 4684 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.442157 4684 manager.go:217] Machine: {Timestamp:2026-01-21 10:06:02.439498287 +0000 UTC m=+0.197581314 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:667581df-3edd-489b-b118-69df188c96a2 BootID:74077632-3431-446e-a506-0b618e843835 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:28:d8:82 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:28:d8:82 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:22:c1:55 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:ae:a9:6e Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:fc:c9:0d Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:ec:fc:68 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:52:92:8b:2f:d3:e3 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:f2:25:60:f0:ce:c5 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.442728 4684 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.442966 4684 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.443666 4684 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.444064 4684 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.444178 4684 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.444559 4684 topology_manager.go:138] "Creating topology manager with none policy"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.444580 4684 container_manager_linux.go:303] "Creating device plugin manager"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.444885 4684 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.444970 4684 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.445293 4684 state_mem.go:36] "Initialized new in-memory state store"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.445580 4684 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.446615 4684 kubelet.go:418] "Attempting to sync node with API server"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.446662 4684 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.446715 4684 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.446745 4684 kubelet.go:324] "Adding apiserver pod source"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.446766 4684 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.449291 4684 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.449715 4684 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.449730 4684 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused
Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.449711 4684 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused
Jan 21 10:06:02 crc kubenswrapper[4684]: E0121 10:06:02.449859 4684 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError"
Jan 21 10:06:02 crc kubenswrapper[4684]: E0121 10:06:02.449859 4684 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.450517 4684 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.451047 4684 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.451080 4684 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.451090 4684 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.451101 4684 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.451117 4684 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.451128 4684 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.451137 4684 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.451153 4684 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.451163 4684 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.451174 4684 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.451187 4684 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.451196 4684 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.451551 4684 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.451988 4684 server.go:1280] "Started kubelet"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.452633 4684 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.452656 4684 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.453437 4684 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused
Jan 21 10:06:02 crc systemd[1]: Started Kubernetes Kubelet.
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.453891 4684 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.456682 4684 server.go:460] "Adding debug handlers to kubelet server"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.456839 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.456885 4684 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.457119 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 22:11:21.226090317 +0000 UTC
Jan 21 10:06:02 crc kubenswrapper[4684]: E0121 10:06:02.456997 4684 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.9:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188cb6ff932e3bde default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 10:06:02.45195875 +0000 UTC m=+0.210041737,LastTimestamp:2026-01-21 10:06:02.45195875 +0000 UTC m=+0.210041737,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.457497 4684 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.457514 4684 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 21 10:06:02 crc kubenswrapper[4684]: E0121 10:06:02.457538 4684 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.457591 4684 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 21 10:06:02 crc kubenswrapper[4684]: E0121 10:06:02.458167 4684 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="200ms"
Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.458797 4684 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused
Jan 21 10:06:02 crc kubenswrapper[4684]: E0121 10:06:02.458889 4684 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.462788 4684 factory.go:55] Registering systemd factory
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.462849 4684 factory.go:221] Registration of the systemd container factory successfully
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.464550 4684 factory.go:153] Registering CRI-O factory
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.464626 4684 factory.go:221] Registration of the crio container factory successfully
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.464814 4684 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.464859 4684 factory.go:103] Registering Raw factory
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.464889 4684 manager.go:1196] Started watching for new ooms in manager
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.470801 4684 manager.go:319] Starting recovery of all containers
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.471520 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.471573 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.471585 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.471598 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.471610 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.471621 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.471631 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.471641 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.471653 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.471662 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.471672 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.471684 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.471696 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.471710 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.471720 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.471760 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.471770 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.471780 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.471790 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.471801 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.471811 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.471822 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.471832 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.471841 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472071 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472148 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472167 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472182 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472195 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472208 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472221 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472231 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472241 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472250 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472262 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472272 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472304 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472316 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472343 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472352 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472378 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472389 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472398 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472409 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472419 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472429 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472439 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472448 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472459 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472487 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472496 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472507 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472532 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472557 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472574 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472589 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472600 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472612 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472621 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472630 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472641 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472653 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472668 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472707 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472731 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472750 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472768 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472783 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472798 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472814 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472826 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472840 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472854 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472871 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472886 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472898 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472911 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472924 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472940 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472954 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472967 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472982 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.472996 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473012 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473026 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473085 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473101 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473119 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473135 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473150 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473165 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473179 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473192 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473205 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473216 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473227 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473241 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473255 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473266 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473281 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473298 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473313 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473325 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473339 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473388 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473407 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473423 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473440 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473453 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473466 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473479 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473493 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473508 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473521 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473534 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473551 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473566 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473582 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473601 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473614 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473629 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473642 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473657 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473671 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473684 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473696 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473709 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473721 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473733 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473745 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473758 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473771 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473785 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473799 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473814 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473830 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473844 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473860 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.473873 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.475022 4684 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.475091 4684
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.475121 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.475149 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.475171 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.475192 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.475209 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.475225 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.475243 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.475295 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.475320 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.475338 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.475353 4684 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.475399 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.475415 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.475430 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.475442 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.475456 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.475470 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.475487 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.475505 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.475524 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.475542 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.475559 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.475574 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.475589 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.475606 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.475621 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.475637 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.475652 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.475668 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.476844 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.476914 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.476934 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.476953 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.476971 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.476992 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477010 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477026 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477041 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477055 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477076 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477094 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477109 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477229 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477247 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477263 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477278 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477293 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477308 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477321 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477334 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477347 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477383 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477423 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477437 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477456 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477475 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477495 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477511 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477526 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477540 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477553 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477568 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477581 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477597 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477612 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477628 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477643 4684 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477654 4684 reconstruct.go:97] "Volume reconstruction finished" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.477665 4684 reconciler.go:26] "Reconciler: start to sync state" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.494657 4684 manager.go:324] Recovery completed Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.509273 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.510892 4684 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.511563 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.511625 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.511642 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.512694 4684 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.512724 4684 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.512752 4684 state_mem.go:36] "Initialized new in-memory state store" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.513170 4684 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.513221 4684 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.513255 4684 kubelet.go:2335] "Starting kubelet main sync loop" Jan 21 10:06:02 crc kubenswrapper[4684]: E0121 10:06:02.513317 4684 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 21 10:06:02 crc kubenswrapper[4684]: W0121 10:06:02.514201 4684 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Jan 21 10:06:02 crc kubenswrapper[4684]: E0121 10:06:02.514242 4684 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Jan 21 10:06:02 crc kubenswrapper[4684]: E0121 10:06:02.558393 4684 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.562579 4684 policy_none.go:49] "None policy: Start" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.563751 4684 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.563799 4684 state_mem.go:35] "Initializing new in-memory state store" Jan 21 10:06:02 crc kubenswrapper[4684]: E0121 10:06:02.614414 4684 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.615904 4684 manager.go:334] "Starting Device Plugin manager" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.615970 4684 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.615986 4684 server.go:79] "Starting device plugin registration server" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.616502 4684 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.616527 4684 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.616730 4684 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.617006 4684 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.617028 4684 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 21 10:06:02 crc kubenswrapper[4684]: E0121 10:06:02.627051 4684 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 10:06:02 crc kubenswrapper[4684]: E0121 10:06:02.659294 4684 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="400ms" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.717543 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.719017 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.719093 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.719110 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.719158 4684 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 10:06:02 crc kubenswrapper[4684]: E0121 10:06:02.719898 4684 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.9:6443: connect: connection refused" node="crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.814994 4684 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.815149 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.817275 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.817322 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.817346 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.817605 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.817984 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.818090 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.818437 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.818483 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.818494 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.818660 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.818813 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.818854 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.819229 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.819275 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.819290 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.819420 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.819440 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.819410 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.819451 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.819470 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.819550 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.819707 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.819801 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.819828 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.820596 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.820629 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.820666 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.820632 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.820683 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.820703 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.820926 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.820990 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.821011 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.822066 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.822102 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.822116 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.822071 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.822184 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.822194 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.822294 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.822324 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.823029 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.823056 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.823071 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.883020 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.883140 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.883165 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.883184 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.883205 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.883234 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.883334 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.883389 4684 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.883413 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.883887 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.884829 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.884938 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.884993 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.885095 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.885218 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.920491 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.922056 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.922110 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 
10:06:02.922130 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.922171 4684 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 10:06:02 crc kubenswrapper[4684]: E0121 10:06:02.922814 4684 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.9:6443: connect: connection refused" node="crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.987293 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.987429 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.987468 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.987501 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.987530 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.987558 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.987577 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.987586 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.987618 4684 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.987637 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.987664 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.987704 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.987714 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.987725 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.987698 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.987746 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.987759 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.987805 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.987673 4684 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.987826 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.987777 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.987786 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.987794 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.987936 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.987963 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.988246 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.987966 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.987955 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.988423 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 10:06:02 crc kubenswrapper[4684]: I0121 10:06:02.988610 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 10:06:03 crc kubenswrapper[4684]: E0121 10:06:03.060621 4684 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="800ms"
Jan 21 10:06:03 crc kubenswrapper[4684]: I0121 10:06:03.155843 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 21 10:06:03 crc kubenswrapper[4684]: I0121 10:06:03.180926 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 10:06:03 crc kubenswrapper[4684]: W0121 10:06:03.188430 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-a6d31e710c6cf94162125acfb05ef58b58390c16764d54257461443bdb2c02de WatchSource:0}: Error finding container a6d31e710c6cf94162125acfb05ef58b58390c16764d54257461443bdb2c02de: Status 404 returned error can't find the container with id a6d31e710c6cf94162125acfb05ef58b58390c16764d54257461443bdb2c02de
Jan 21 10:06:03 crc kubenswrapper[4684]: W0121 10:06:03.204989 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-01c8f961ca570cbd959c7b9b46f585e07537dae7b40e6e29bb7cc05153c711ad WatchSource:0}: Error finding container 01c8f961ca570cbd959c7b9b46f585e07537dae7b40e6e29bb7cc05153c711ad: Status 404 returned error can't find the container with id 01c8f961ca570cbd959c7b9b46f585e07537dae7b40e6e29bb7cc05153c711ad
Jan 21 10:06:03 crc kubenswrapper[4684]: I0121 10:06:03.209842 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 10:06:03 crc kubenswrapper[4684]: I0121 10:06:03.225751 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 21 10:06:03 crc kubenswrapper[4684]: W0121 10:06:03.229039 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-b697352a17c676d95194a2a38f67b29edcfaf107c896a635874ff7a599f97e7d WatchSource:0}: Error finding container b697352a17c676d95194a2a38f67b29edcfaf107c896a635874ff7a599f97e7d: Status 404 returned error can't find the container with id b697352a17c676d95194a2a38f67b29edcfaf107c896a635874ff7a599f97e7d
Jan 21 10:06:03 crc kubenswrapper[4684]: I0121 10:06:03.231526 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 21 10:06:03 crc kubenswrapper[4684]: W0121 10:06:03.243387 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-a10491b3456fe12cacc92fcf5bf3593689bc514fbd2252ef15482d06ed813e07 WatchSource:0}: Error finding container a10491b3456fe12cacc92fcf5bf3593689bc514fbd2252ef15482d06ed813e07: Status 404 returned error can't find the container with id a10491b3456fe12cacc92fcf5bf3593689bc514fbd2252ef15482d06ed813e07
Jan 21 10:06:03 crc kubenswrapper[4684]: W0121 10:06:03.251849 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-aa4ace4060bcf8a05ca8a6a107b8e5d84f6cd6c296edc467eddcae349e898367 WatchSource:0}: Error finding container aa4ace4060bcf8a05ca8a6a107b8e5d84f6cd6c296edc467eddcae349e898367: Status 404 returned error can't find the container with id aa4ace4060bcf8a05ca8a6a107b8e5d84f6cd6c296edc467eddcae349e898367
Jan 21 10:06:03 crc kubenswrapper[4684]: W0121 10:06:03.285503 4684 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused
Jan 21 10:06:03 crc kubenswrapper[4684]: E0121 10:06:03.285630 4684 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError"
Jan 21 10:06:03 crc kubenswrapper[4684]: W0121 10:06:03.321107 4684 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused
Jan 21 10:06:03 crc kubenswrapper[4684]: E0121 10:06:03.321241 4684 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError"
Jan 21 10:06:03 crc kubenswrapper[4684]: I0121 10:06:03.323491 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:03 crc kubenswrapper[4684]: I0121 10:06:03.324830 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:03 crc kubenswrapper[4684]: I0121 10:06:03.324873 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:03 crc kubenswrapper[4684]: I0121 10:06:03.324886 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:03 crc kubenswrapper[4684]: I0121 10:06:03.324918 4684 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 21 10:06:03 crc kubenswrapper[4684]: E0121 10:06:03.325441 4684 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.9:6443: connect: connection refused" node="crc"
Jan 21 10:06:03 crc kubenswrapper[4684]: I0121 10:06:03.454782 4684 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused
Jan 21 10:06:03 crc kubenswrapper[4684]: I0121 10:06:03.457883 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 17:12:20.060984451 +0000 UTC
Jan 21 10:06:03 crc kubenswrapper[4684]: I0121 10:06:03.518378 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a6d31e710c6cf94162125acfb05ef58b58390c16764d54257461443bdb2c02de"}
Jan 21 10:06:03 crc kubenswrapper[4684]: I0121 10:06:03.519659 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"aa4ace4060bcf8a05ca8a6a107b8e5d84f6cd6c296edc467eddcae349e898367"}
Jan 21 10:06:03 crc kubenswrapper[4684]: I0121 10:06:03.520570 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a10491b3456fe12cacc92fcf5bf3593689bc514fbd2252ef15482d06ed813e07"}
Jan 21 10:06:03 crc kubenswrapper[4684]: I0121 10:06:03.521594 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b697352a17c676d95194a2a38f67b29edcfaf107c896a635874ff7a599f97e7d"}
Jan 21 10:06:03 crc kubenswrapper[4684]: I0121 10:06:03.522946 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"01c8f961ca570cbd959c7b9b46f585e07537dae7b40e6e29bb7cc05153c711ad"}
Jan 21 10:06:03 crc kubenswrapper[4684]: W0121 10:06:03.816149 4684 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused
Jan 21 10:06:03 crc kubenswrapper[4684]: E0121 10:06:03.816607 4684 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError"
Jan 21 10:06:03 crc kubenswrapper[4684]: E0121 10:06:03.862315 4684 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="1.6s"
Jan 21 10:06:04 crc kubenswrapper[4684]: W0121 10:06:04.036834 4684 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused
Jan 21 10:06:04 crc kubenswrapper[4684]: E0121 10:06:04.036915 4684 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError"
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.125783 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.127431 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.127467 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.127483 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.127510 4684 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 21 10:06:04 crc kubenswrapper[4684]: E0121 10:06:04.127974 4684 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.9:6443: connect: connection refused" node="crc"
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.455052 4684 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.458283 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 10:04:25.026220442 +0000 UTC
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.470760 4684 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 21 10:06:04 crc kubenswrapper[4684]: E0121 10:06:04.471743 4684 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError"
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.530508 4684 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906" exitCode=0
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.530646 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906"}
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.530753 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.532633 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.532686 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.532705 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.536753 4684 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="37c58aab8fa372959ddcec6b4b719023c50d5a59a91210462c4343784b542138" exitCode=0
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.536902 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.536915 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"37c58aab8fa372959ddcec6b4b719023c50d5a59a91210462c4343784b542138"}
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.538183 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.538253 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.538280 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.543123 4684 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="8347766168f6c478e865958122b7baaf91d9e888070df679c7710b2aa4092599" exitCode=0
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.543261 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"8347766168f6c478e865958122b7baaf91d9e888070df679c7710b2aa4092599"}
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.543291 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.545686 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.545734 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.545751 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.549581 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f208b4ff9c2d3e45cf8c338a42eaff90f551f49c7b98fe724ec8b8157e8b4a56"}
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.549672 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81"}
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.549695 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af"}
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.549713 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e"}
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.549855 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.551314 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.551443 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.551464 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.554328 4684 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319" exitCode=0
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.554426 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319"}
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.554527 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.555691 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.555747 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.555766 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.570076 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.571499 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.571552 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:04 crc kubenswrapper[4684]: I0121 10:06:04.571600 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:05 crc kubenswrapper[4684]: I0121 10:06:05.458617 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 04:48:28.924670411 +0000 UTC
Jan 21 10:06:05 crc kubenswrapper[4684]: I0121 10:06:05.559739 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"754bf369301df94a8be477f1c7520ddd02a55a6ff977a9a56f2d6a5798e114da"}
Jan 21 10:06:05 crc kubenswrapper[4684]: I0121 10:06:05.559796 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"daf5cff7de59dbee28051caf06c20848d8410ecd30957032e73fb50e08205a34"}
Jan 21 10:06:05 crc kubenswrapper[4684]: I0121 10:06:05.559813 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8b8af2a9e2c9cccbb9146355c3d2bbadbb7e809cadbe9f03418a572949b5fef7"}
Jan 21 10:06:05 crc kubenswrapper[4684]: I0121 10:06:05.559929 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:05 crc kubenswrapper[4684]: I0121 10:06:05.561330 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:05 crc kubenswrapper[4684]: I0121 10:06:05.561383 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:05 crc kubenswrapper[4684]: I0121 10:06:05.561397 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:05 crc kubenswrapper[4684]: I0121 10:06:05.563726 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2"}
Jan 21 10:06:05 crc kubenswrapper[4684]: I0121 10:06:05.563765 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62"}
Jan 21 10:06:05 crc kubenswrapper[4684]: I0121 10:06:05.563780 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0"}
Jan 21 10:06:05 crc kubenswrapper[4684]: I0121 10:06:05.563793 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034"}
Jan 21 10:06:05 crc kubenswrapper[4684]: I0121 10:06:05.565677 4684 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767" exitCode=0
Jan 21 10:06:05 crc kubenswrapper[4684]: I0121 10:06:05.565734 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767"}
Jan 21 10:06:05 crc kubenswrapper[4684]: I0121 10:06:05.565847 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:05 crc kubenswrapper[4684]: I0121 10:06:05.567084 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:05 crc kubenswrapper[4684]: I0121 10:06:05.567111 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:05 crc kubenswrapper[4684]: I0121 10:06:05.567122 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:05 crc kubenswrapper[4684]: I0121 10:06:05.570253 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"9547155c550630a738aca269d69339eb4edd218b4656f326d97b51e8dc9ad1ae"}
Jan 21 10:06:05 crc kubenswrapper[4684]: I0121 10:06:05.570344 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:05 crc kubenswrapper[4684]: I0121 10:06:05.570357 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:05 crc kubenswrapper[4684]: I0121 10:06:05.571491 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:05 crc kubenswrapper[4684]: I0121 10:06:05.571520 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:05 crc kubenswrapper[4684]: I0121 10:06:05.571538 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:05 crc kubenswrapper[4684]: I0121 10:06:05.571497 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:05 crc kubenswrapper[4684]: I0121 10:06:05.571602 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:05 crc kubenswrapper[4684]: I0121 10:06:05.571614 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:05 crc kubenswrapper[4684]: I0121 10:06:05.729526 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:05 crc kubenswrapper[4684]: I0121 10:06:05.732133 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:05 crc kubenswrapper[4684]: I0121 10:06:05.732178 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:05 crc kubenswrapper[4684]: I0121 10:06:05.732190 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:05 crc kubenswrapper[4684]: I0121 10:06:05.732228 4684 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 21 10:06:06 crc kubenswrapper[4684]: I0121 10:06:06.458766 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 17:49:07.208338076 +0000 UTC
Jan 21 10:06:06 crc kubenswrapper[4684]: I0121 10:06:06.577961 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8"}
Jan 21 10:06:06 crc kubenswrapper[4684]: I0121 10:06:06.578119 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:06 crc kubenswrapper[4684]: I0121 10:06:06.579505 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:06 crc kubenswrapper[4684]: I0121 10:06:06.579540 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:06 crc kubenswrapper[4684]: I0121 10:06:06.579551 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:06 crc kubenswrapper[4684]: I0121 10:06:06.580248 4684 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c" exitCode=0
Jan 21 10:06:06 crc kubenswrapper[4684]: I0121 10:06:06.580310 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c"}
Jan 21 10:06:06 crc kubenswrapper[4684]: I0121 10:06:06.580379 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:06 crc kubenswrapper[4684]: I0121 10:06:06.580388 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:06 crc kubenswrapper[4684]: I0121 10:06:06.580421 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 21 10:06:06 crc kubenswrapper[4684]: I0121 10:06:06.580771 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:06 crc kubenswrapper[4684]: I0121 10:06:06.581298 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:06 crc kubenswrapper[4684]: I0121 10:06:06.581326 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:06 crc kubenswrapper[4684]: I0121 10:06:06.581338 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:06 crc kubenswrapper[4684]: I0121 10:06:06.581440 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:06 crc kubenswrapper[4684]: I0121 10:06:06.581468 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:06 crc kubenswrapper[4684]: I0121 10:06:06.581481 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:06 crc kubenswrapper[4684]: I0121 10:06:06.581573 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:06 crc kubenswrapper[4684]: I0121 10:06:06.581598 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:06 crc kubenswrapper[4684]: I0121 10:06:06.581611 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:07 crc kubenswrapper[4684]: I0121 10:06:07.274422 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 10:06:07 crc kubenswrapper[4684]: I0121 10:06:07.459921 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 05:09:45.851219532 +0000 UTC
Jan 21 10:06:07 crc kubenswrapper[4684]: I0121 10:06:07.588194 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7"}
Jan 21 10:06:07 crc kubenswrapper[4684]: I0121 10:06:07.588278 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b"}
Jan 21 10:06:07 crc kubenswrapper[4684]: I0121 10:06:07.588296 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:07 crc kubenswrapper[4684]: I0121 10:06:07.588300 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10"}
Jan 21 10:06:07 crc kubenswrapper[4684]: I0121 10:06:07.588308 4684 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 10:06:07 crc kubenswrapper[4684]: I0121 10:06:07.588624 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:07 crc kubenswrapper[4684]: I0121 10:06:07.588501 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5"}
Jan 21 10:06:07 crc kubenswrapper[4684]: I0121 10:06:07.589521 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:07 crc kubenswrapper[4684]: I0121 10:06:07.589566 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:07 crc kubenswrapper[4684]: I0121 10:06:07.589584 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:07 crc kubenswrapper[4684]: I0121 10:06:07.589703 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:07 crc kubenswrapper[4684]: I0121 10:06:07.589724 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:07 crc kubenswrapper[4684]: I0121 10:06:07.589733 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:08 crc kubenswrapper[4684]: I0121 10:06:08.460727 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 21:51:10.250308943 +0000 UTC
Jan 21 10:06:08 crc kubenswrapper[4684]: I0121 10:06:08.595557 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1124cb6c548323b2dc7e293ed550a11d345a86627cfa7221d08b7b934ef5fd0c"}
Jan 21 10:06:08 crc kubenswrapper[4684]: I0121 10:06:08.595579 4684 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 10:06:08 crc kubenswrapper[4684]: I0121 10:06:08.595719 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:08 crc kubenswrapper[4684]: I0121 10:06:08.595743 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:08 crc kubenswrapper[4684]: I0121 10:06:08.597103 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:08 crc kubenswrapper[4684]: I0121 10:06:08.597139 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:08 crc kubenswrapper[4684]: I0121 10:06:08.597153 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:08 crc kubenswrapper[4684]: I0121 10:06:08.597161 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:08 crc kubenswrapper[4684]: I0121 10:06:08.597172 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:08 crc kubenswrapper[4684]: I0121 10:06:08.597174 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:08 crc kubenswrapper[4684]: I0121 10:06:08.689746 4684 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 21 10:06:08 crc kubenswrapper[4684]: I0121 10:06:08.744449 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 10:06:09 crc kubenswrapper[4684]: I0121 10:06:09.461408 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 14:55:17.782360695 +0000 UTC
Jan 21 10:06:09 crc kubenswrapper[4684]: I0121 10:06:09.599002 4684 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 10:06:09 crc kubenswrapper[4684]: I0121 10:06:09.599051 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:09 crc kubenswrapper[4684]: I0121 10:06:09.599073 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:09 crc kubenswrapper[4684]: I0121 10:06:09.600689 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:09 crc kubenswrapper[4684]: I0121 10:06:09.600751 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:09 crc kubenswrapper[4684]: I0121 10:06:09.600767 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:09 crc kubenswrapper[4684]: I0121 10:06:09.600915 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:09 crc kubenswrapper[4684]: I0121 10:06:09.601491 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:09 crc kubenswrapper[4684]: I0121 10:06:09.601520 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:09 crc kubenswrapper[4684]: I0121 10:06:09.761884 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 10:06:10 crc kubenswrapper[4684]: I0121 10:06:10.033039 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Jan 21 10:06:10 crc kubenswrapper[4684]: I0121 10:06:10.461990 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 03:30:59.011999462 +0000 UTC
Jan 21 10:06:10 crc kubenswrapper[4684]: I0121 10:06:10.602012 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:10 crc kubenswrapper[4684]: I0121 10:06:10.602012 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:10 crc kubenswrapper[4684]: I0121 10:06:10.602962 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:10 crc kubenswrapper[4684]: I0121 10:06:10.602995 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:10 crc kubenswrapper[4684]: I0121 10:06:10.603005 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:10 crc kubenswrapper[4684]: I0121 10:06:10.603611 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:10 crc kubenswrapper[4684]: I0121 10:06:10.603689 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:10 crc kubenswrapper[4684]: I0121 10:06:10.603714 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:10 crc kubenswrapper[4684]: I0121 10:06:10.999618 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 21 10:06:11 crc kubenswrapper[4684]: I0121 10:06:11.462528 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 19:11:34.54733334 +0000 UTC
Jan 21 10:06:11 crc kubenswrapper[4684]: I0121 10:06:11.604777 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:11 crc kubenswrapper[4684]: I0121 10:06:11.605874 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:11 crc kubenswrapper[4684]: I0121 10:06:11.605925 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:11 crc kubenswrapper[4684]: I0121 10:06:11.605946 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:12 crc kubenswrapper[4684]: I0121 10:06:12.224937 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 10:06:12 crc kubenswrapper[4684]: I0121 10:06:12.225229 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:12 crc kubenswrapper[4684]: I0121 10:06:12.226725 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:12 crc kubenswrapper[4684]: I0121 10:06:12.226765 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:12 crc kubenswrapper[4684]: I0121 10:06:12.226778 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:12 crc kubenswrapper[4684]: I0121 10:06:12.463172 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 03:03:52.998388591 +0000 UTC
Jan 21 10:06:12 crc kubenswrapper[4684]: E0121 10:06:12.627426 4684 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 21 10:06:13 crc kubenswrapper[4684]: I0121 10:06:13.095283 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 10:06:13 crc kubenswrapper[4684]: I0121 10:06:13.095545 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:13 crc kubenswrapper[4684]: I0121 10:06:13.097407 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:13 crc kubenswrapper[4684]: I0121 10:06:13.097466 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:13 crc kubenswrapper[4684]: I0121 10:06:13.097488 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:13 crc kubenswrapper[4684]: I0121 10:06:13.104005 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 10:06:13 crc kubenswrapper[4684]: I0121 10:06:13.463968 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 08:16:16.824281151 +0000 UTC
Jan 21 10:06:13 crc kubenswrapper[4684]: I0121 10:06:13.611061 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:13 crc kubenswrapper[4684]: I0121 10:06:13.612936 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:13 crc kubenswrapper[4684]: I0121 10:06:13.613018 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:13 crc kubenswrapper[4684]: I0121 10:06:13.613047 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:13 crc kubenswrapper[4684]: I0121 10:06:13.619554 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 10:06:14 crc kubenswrapper[4684]: I0121 10:06:14.255398 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 10:06:14 crc kubenswrapper[4684]: I0121 10:06:14.464453 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 23:04:39.523995652 +0000 UTC
Jan 21 10:06:14 crc kubenswrapper[4684]: I0121 10:06:14.504850 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 10:06:14 crc kubenswrapper[4684]: I0121 10:06:14.613258 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:14 crc kubenswrapper[4684]: I0121 10:06:14.614324 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:14 crc kubenswrapper[4684]: I0121 10:06:14.614385 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:14 crc kubenswrapper[4684]: I0121 10:06:14.614398 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:15 crc kubenswrapper[4684]: I0121 10:06:15.455656 4684 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Jan 21 10:06:15 crc kubenswrapper[4684]: E0121 10:06:15.463141 4684 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s"
Jan 21 10:06:15 crc kubenswrapper[4684]: I0121 10:06:15.465429 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 11:27:07.742367404 +0000 UTC
Jan 21 10:06:15 crc kubenswrapper[4684]: I0121 10:06:15.615523 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:15 crc kubenswrapper[4684]: I0121 10:06:15.616480 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:15 crc kubenswrapper[4684]: I0121 10:06:15.616611 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:15 crc kubenswrapper[4684]: I0121 10:06:15.616703 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:15 crc kubenswrapper[4684]: E0121 10:06:15.733255 4684 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc"
Jan 21 10:06:15 crc kubenswrapper[4684]: W0121 10:06:15.859471 4684 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 21 10:06:15 crc kubenswrapper[4684]: I0121 10:06:15.859582 4684 trace.go:236] Trace[544773361]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 10:06:05.857) (total time: 10001ms):
Jan 21 10:06:15 crc kubenswrapper[4684]: Trace[544773361]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (10:06:15.859)
Jan 21 10:06:15 crc kubenswrapper[4684]: Trace[544773361]: [10.001661335s] [10.001661335s] END
Jan 21 10:06:15 crc kubenswrapper[4684]: E0121 10:06:15.859619 4684 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 21 10:06:16 crc kubenswrapper[4684]: W0121 10:06:16.078137 4684 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 21 10:06:16 crc kubenswrapper[4684]: W0121 10:06:16.078155 4684 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 21 10:06:16 crc kubenswrapper[4684]: I0121 10:06:16.078234 4684 trace.go:236] Trace[57112080]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 10:06:06.076) (total time: 10001ms):
Jan 21 10:06:16 crc kubenswrapper[4684]: Trace[57112080]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (10:06:16.078)
Jan 21 10:06:16 crc kubenswrapper[4684]: Trace[57112080]: [10.001707039s] [10.001707039s] END
Jan 21 10:06:16 crc kubenswrapper[4684]: I0121 10:06:16.078291 4684 trace.go:236] Trace[1517808291]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 10:06:06.076) (total time: 10001ms):
Jan 21 10:06:16 crc kubenswrapper[4684]: Trace[1517808291]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (10:06:16.078)
Jan 21 10:06:16 crc kubenswrapper[4684]: Trace[1517808291]: [10.001802813s] [10.001802813s] END
Jan 21 10:06:16 crc kubenswrapper[4684]: E0121 10:06:16.083061 4684 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 21 10:06:16 crc kubenswrapper[4684]: E0121 10:06:16.083233 4684 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 21 10:06:16 crc kubenswrapper[4684]: W0121 10:06:16.354408 4684 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Jan 21 10:06:16 crc kubenswrapper[4684]: I0121 10:06:16.354527 4684 trace.go:236] Trace[1149557089]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 10:06:06.353) (total time: 10001ms):
Jan 21 10:06:16 crc kubenswrapper[4684]: Trace[1149557089]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (10:06:16.354)
Jan 21 10:06:16 crc kubenswrapper[4684]: Trace[1149557089]: [10.001479567s] [10.001479567s] END
Jan 21 10:06:16 crc kubenswrapper[4684]: E0121 10:06:16.354559 4684 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Jan 21 10:06:16 crc kubenswrapper[4684]: I0121 10:06:16.465940 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 08:52:49.279881249 +0000 UTC
Jan 21 10:06:16 crc kubenswrapper[4684]: I0121 10:06:16.750521 4684 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 21 10:06:16 crc kubenswrapper[4684]: I0121 10:06:16.750586 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 21 10:06:16 crc kubenswrapper[4684]: I0121 10:06:16.765303 4684 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 21 10:06:16 crc kubenswrapper[4684]: I0121 10:06:16.765400 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 21 10:06:17 crc kubenswrapper[4684]: I0121 10:06:17.466783 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 18:54:24.337169193 +0000 UTC
Jan 21 10:06:17 crc kubenswrapper[4684]: I0121 10:06:17.505495 4684 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 10:06:17 crc kubenswrapper[4684]: I0121 10:06:17.505602 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 21 10:06:18 crc kubenswrapper[4684]: I0121 10:06:18.467408 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 09:27:52.664031731 +0000 UTC
Jan 21 10:06:18 crc kubenswrapper[4684]: I0121 10:06:18.753572 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 10:06:18 crc kubenswrapper[4684]: I0121 10:06:18.753775 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:18 crc kubenswrapper[4684]: I0121 10:06:18.755082 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:18 crc kubenswrapper[4684]: I0121 10:06:18.755181 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:18 crc kubenswrapper[4684]: I0121 10:06:18.755199 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:18 crc kubenswrapper[4684]: I0121 10:06:18.761570 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 10:06:18 crc kubenswrapper[4684]: I0121 10:06:18.934778 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:18 crc kubenswrapper[4684]: I0121 10:06:18.936810 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:18 crc kubenswrapper[4684]: I0121 10:06:18.936847 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:18 crc kubenswrapper[4684]: I0121 10:06:18.936858 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:18 crc kubenswrapper[4684]: I0121 10:06:18.936886 4684 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 21 10:06:18 crc kubenswrapper[4684]: E0121 10:06:18.942843 4684 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 21 10:06:19 crc kubenswrapper[4684]: I0121 10:06:19.467599 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 22:06:34.721062255 +0000 UTC
Jan 21 10:06:19 crc kubenswrapper[4684]: I0121 10:06:19.632646 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 21 10:06:19 crc kubenswrapper[4684]: I0121 10:06:19.633620 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:19 crc kubenswrapper[4684]: I0121 10:06:19.633678 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:19 crc kubenswrapper[4684]: I0121 10:06:19.633715 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:20 crc kubenswrapper[4684]: I0121 10:06:20.064922 4684 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 21 10:06:20 crc kubenswrapper[4684]: I0121 10:06:20.468585 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 20:27:05.480411716 +0000 UTC
Jan 21 10:06:20 crc kubenswrapper[4684]: I0121 10:06:20.658625 4684 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.041350 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.055160 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.458186 4684 apiserver.go:52] "Watching apiserver"
Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.461928 4684 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.462311 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-etcd/etcd-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"]
Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.462832 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.462843 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.462958 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 21 10:06:21 crc kubenswrapper[4684]: E0121 10:06:21.462947 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.463194 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 10:06:21 crc kubenswrapper[4684]: E0121 10:06:21.463423 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.463636 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.464018 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 10:06:21 crc kubenswrapper[4684]: E0121 10:06:21.464128 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.465844 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.467431 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.467537 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.467557 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.467577 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.467619 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.467432 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.467727 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.467917 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.468721 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 12:29:31.601007284 +0000 UTC
Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.502253 4684 status_manager.go:875] "Failed to update status for pod"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.515904 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.532750 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.546511 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.559088 4684 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.567605 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a86627cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.579504 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.591627 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.601168 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:06:21 crc kubenswrapper[4684]: E0121 10:06:21.650150 4684 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.760108 4684 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.765889 4684 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.785165 4684 csr.go:261] certificate signing request csr-85xxx is approved, waiting to be issued Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.793571 4684 csr.go:257] certificate signing request csr-85xxx is issued Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868330 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868396 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868420 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868438 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868454 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: 
\"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868473 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868489 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868505 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868521 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868534 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868549 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868567 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868598 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868620 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868635 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868650 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868664 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868704 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868719 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868745 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868770 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868788 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868802 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868815 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868830 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: 
\"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868847 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868861 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868876 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868890 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868906 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868922 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868938 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868952 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.868965 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869003 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869019 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869034 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869049 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869064 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869079 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869094 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869130 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869146 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869161 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869175 4684 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869190 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869204 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869218 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869232 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869256 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869271 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869285 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869291 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869311 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869401 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869426 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869444 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869461 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869478 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869493 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869511 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869530 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869544 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869559 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869576 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869574 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869592 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869625 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869641 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869660 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869679 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869698 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869726 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869734 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869742 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869760 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869784 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869802 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869816 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869831 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869846 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869872 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 10:06:21 crc kubenswrapper[4684]: 
I0121 10:06:21.869888 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869904 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869920 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869937 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869961 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.869986 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.870004 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.870019 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.870039 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.870055 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 
10:06:21.870071 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.870088 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.870107 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.870121 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.870136 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.870152 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.870166 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.870181 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.870196 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.870211 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.870226 
4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.870263 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.870277 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.870307 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.870324 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.870339 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.870372 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.870394 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.870420 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.870452 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 
10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.870467 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.870483 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.870514 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.870582 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.870943 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.871076 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.871856 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.872233 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.872499 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.872686 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.872877 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.873036 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.873072 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.873260 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.873476 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.873645 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.873784 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.874042 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.874154 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.874227 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.874510 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.874712 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.875084 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.875650 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). 
InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.876115 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.876805 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.877450 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.878891 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879031 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879068 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879089 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879109 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879136 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879157 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879181 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879204 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879247 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879287 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879311 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879342 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879391 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879429 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879458 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879483 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879511 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879533 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879556 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879578 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879600 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879621 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879643 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879665 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879696 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879721 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879753 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879813 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879853 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879877 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879899 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879928 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879950 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879971 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879996 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880020 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880042 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880068 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880092 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880118 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880159 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880182 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880205 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880237 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880260 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880284 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880306 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880336 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880374 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880400 4684 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880425 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880448 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880474 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880506 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880538 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880560 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880586 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880610 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880632 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880654 4684 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880678 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880702 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880724 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880750 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880860 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880887 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880911 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880934 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880991 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.881018 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.881044 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.881070 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.881096 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.881119 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.881142 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.881166 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.881193 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.881225 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.881267 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.881290 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.881316 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.881340 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.881380 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.884897 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.884984 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.885036 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.885066 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.885099 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: 
\"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.885134 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.885171 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.885204 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.885233 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.885259 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.885288 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.885322 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.885347 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.885393 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.894141 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.894180 4684 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.894200 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.894226 4684 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.894241 4684 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.894257 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.894273 4684 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.894290 4684 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.894304 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.894321 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.894342 4684 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.894376 4684 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.894393 4684 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.894409 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.894431 4684 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.894446 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.894461 4684 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.894476 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.894493 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.894508 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.894523 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.894543 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.894558 4684 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.894573 4684 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.894586 4684 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.894601 4684 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.894615 4684 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.894627 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879247 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879392 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.879812 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880394 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880565 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880714 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.880858 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.881013 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.885870 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.886216 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.886421 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.886597 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.886634 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.886907 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.887014 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.887026 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.887407 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.887718 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.887780 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.888027 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.888049 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.888493 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.888875 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.889068 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.889736 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.889892 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.890465 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.891446 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.892712 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.892800 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.893624 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.893619 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.893892 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.894318 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.894743 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.903164 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.903221 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.903318 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.912286 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.912331 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.912713 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.912727 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.913036 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.913056 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.913213 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.913309 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.913385 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.913413 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.913574 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.913760 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.914146 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.914149 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.916956 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.917255 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.917736 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.919794 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). 
InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.919990 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.920029 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.920167 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.920234 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.917979 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.918155 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.918288 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.895407 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.918404 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.895506 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.918698 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.918773 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.919111 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.919262 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.919407 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.919543 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.919632 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.920712 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.920876 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.921090 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.921269 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.921451 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.921593 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.924676 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.924862 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.927726 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.927933 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.932919 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.933298 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.933453 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). 
InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.933878 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.936932 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.937143 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.937495 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.937639 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.937869 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.938185 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.938458 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). 
InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.938533 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.938714 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.938881 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: E0121 10:06:21.939423 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:06:22.439395188 +0000 UTC m=+20.197478155 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.939634 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.939771 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.939844 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.940045 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: E0121 10:06:21.940196 4684 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.940385 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.940073 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.940403 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.940097 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.940503 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.940519 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.940520 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.940584 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: E0121 10:06:21.940640 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:06:22.440628097 +0000 UTC m=+20.198711064 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:06:21 crc kubenswrapper[4684]: E0121 10:06:21.941426 4684 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.941435 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:06:21 crc kubenswrapper[4684]: E0121 10:06:21.941468 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:06:22.441459253 +0000 UTC m=+20.199542420 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.941700 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.941955 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.942232 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.942642 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.942076 4684 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.942971 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.943232 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.943567 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.943861 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.944084 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.944298 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.944529 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.944565 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.944706 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.945134 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.945831 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.942864 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.950674 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.950929 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.951071 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.951212 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.951519 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.951570 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.951695 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.951755 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.951851 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.951813 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.952198 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.952382 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.952611 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.952649 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.952811 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.953009 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.953016 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.953202 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.953529 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.955325 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.955336 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.955932 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.956254 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.961664 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.962436 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.963091 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.964922 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.965249 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.966134 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.966588 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.966870 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.968849 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.968913 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.969203 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.969642 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.970192 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.970687 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.972542 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.972719 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: E0121 10:06:21.978828 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:06:21 crc kubenswrapper[4684]: E0121 10:06:21.978868 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:06:21 crc kubenswrapper[4684]: E0121 10:06:21.978887 4684 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:06:21 crc kubenswrapper[4684]: E0121 10:06:21.978976 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 10:06:22.47894708 +0000 UTC m=+20.237030237 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.983823 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.988882 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.989000 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995197 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995254 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995383 4684 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995397 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995406 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995416 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995426 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995436 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995446 4684 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995455 4684 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995465 4684 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995475 4684 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995509 4684 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995520 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995529 4684 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995538 4684 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995548 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995557 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995566 4684 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995575 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995585 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995595 4684 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995605 4684 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995616 
4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995627 4684 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995638 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995651 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995661 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995670 4684 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995681 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995692 4684 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995703 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995713 4684 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995723 4684 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995733 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995742 4684 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: 
I0121 10:06:21.995752 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995762 4684 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995773 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995783 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995793 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995802 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995811 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995820 4684 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995828 4684 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995837 4684 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995847 4684 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995855 4684 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995864 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 
crc kubenswrapper[4684]: I0121 10:06:21.995884 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995892 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995901 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995909 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995920 4684 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995929 4684 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995937 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995947 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995957 4684 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995972 4684 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.995984 4684 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996004 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996015 4684 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" 
DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996026 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996037 4684 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996049 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996060 4684 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996070 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996080 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996123 4684 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996131 4684 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996140 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996149 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996186 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996195 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996206 4684 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996216 4684 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996225 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996234 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996244 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996254 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996263 4684 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996273 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996283 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996299 4684 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996315 4684 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996324 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996335 4684 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996343 4684 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") 
on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996351 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996374 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996383 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996392 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996401 4684 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996409 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996419 4684 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996427 4684 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996448 4684 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996457 4684 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996466 4684 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996475 4684 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996483 4684 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc 
kubenswrapper[4684]: I0121 10:06:21.996491 4684 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996500 4684 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996508 4684 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996517 4684 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996525 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996535 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996543 4684 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996553 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996562 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996571 4684 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996579 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996588 4684 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996598 4684 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 21 
10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996607 4684 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996619 4684 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996628 4684 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996637 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996646 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996654 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996663 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996671 4684 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996680 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996696 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996704 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996713 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996722 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996730 4684 
reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996738 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996746 4684 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996755 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996764 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996774 4684 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996783 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996791 4684 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996799 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: E0121 10:06:21.996755 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:06:21 crc kubenswrapper[4684]: E0121 10:06:21.996835 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:06:21 crc kubenswrapper[4684]: E0121 10:06:21.996849 4684 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996882 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996902 4684 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996921 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:21 crc kubenswrapper[4684]: I0121 10:06:21.996932 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:22 crc kubenswrapper[4684]: E0121 10:06:21.996975 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 10:06:22.496958967 +0000 UTC m=+20.255041934 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:21.996974 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:21.996993 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:21.997050 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:21.997065 4684 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:21.997078 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:21.997091 4684 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:21.997104 4684 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" 
(UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:21.997118 4684 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:21.997129 4684 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:21.997143 4684 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:21.997154 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:21.997168 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:21.997180 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:21.997198 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:21.997208 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:21.997221 4684 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:21.997234 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:21.997246 4684 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:21.997258 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:21.997270 4684 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:21.997281 4684 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:21.997293 4684 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:21.997305 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:21.997316 4684 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:21.997327 4684 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:21.997338 4684 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:21.997526 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.008096 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.008598 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.015795 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.024459 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.033454 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.075503 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.084592 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.089517 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.098157 4684 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:56240->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.098229 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:56240->192.168.126.11:17697: read: connection reset by peer" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.098470 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.098504 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.098517 4684 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.098549 4684 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.098625 4684 patch_prober.go:28] interesting 
pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.098646 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.098766 4684 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.098780 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 21 10:06:22 crc kubenswrapper[4684]: E0121 10:06:22.113122 4684 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 10:06:22 crc kubenswrapper[4684]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Jan 21 10:06:22 crc kubenswrapper[4684]: set -o allexport Jan 21 10:06:22 crc kubenswrapper[4684]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 21 10:06:22 crc kubenswrapper[4684]: source /etc/kubernetes/apiserver-url.env Jan 21 10:06:22 crc kubenswrapper[4684]: else Jan 21 10:06:22 crc kubenswrapper[4684]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 21 10:06:22 crc kubenswrapper[4684]: exit 1 Jan 21 10:06:22 crc kubenswrapper[4684]: fi Jan 21 10:06:22 crc kubenswrapper[4684]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 21 10:06:22 crc kubenswrapper[4684]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 10:06:22 crc kubenswrapper[4684]: > logger="UnhandledError" Jan 21 10:06:22 crc kubenswrapper[4684]: E0121 10:06:22.114441 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Jan 21 10:06:22 crc kubenswrapper[4684]: E0121 10:06:22.123711 4684 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 10:06:22 crc kubenswrapper[4684]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Jan 21 10:06:22 crc kubenswrapper[4684]: if [[ -f "/env/_master" ]]; then Jan 21 10:06:22 crc kubenswrapper[4684]: set -o allexport Jan 21 10:06:22 crc kubenswrapper[4684]: source "/env/_master" Jan 21 10:06:22 crc kubenswrapper[4684]: set +o allexport Jan 21 10:06:22 crc kubenswrapper[4684]: fi Jan 21 10:06:22 crc kubenswrapper[4684]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Jan 21 10:06:22 crc kubenswrapper[4684]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 21 10:06:22 crc kubenswrapper[4684]: ho_enable="--enable-hybrid-overlay" Jan 21 10:06:22 crc kubenswrapper[4684]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 21 10:06:22 crc kubenswrapper[4684]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 21 10:06:22 crc kubenswrapper[4684]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 21 10:06:22 crc kubenswrapper[4684]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 21 10:06:22 crc kubenswrapper[4684]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 21 10:06:22 crc kubenswrapper[4684]: --webhook-host=127.0.0.1 \ Jan 21 10:06:22 crc kubenswrapper[4684]: --webhook-port=9743 \ Jan 21 10:06:22 crc kubenswrapper[4684]: ${ho_enable} \ Jan 21 10:06:22 crc kubenswrapper[4684]: --enable-interconnect \ Jan 21 10:06:22 crc kubenswrapper[4684]: --disable-approver \ Jan 21 10:06:22 crc kubenswrapper[4684]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 21 10:06:22 crc kubenswrapper[4684]: --wait-for-kubernetes-api=200s \ Jan 21 10:06:22 crc kubenswrapper[4684]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 21 10:06:22 crc kubenswrapper[4684]: --loglevel="${LOGLEVEL}" Jan 21 10:06:22 crc kubenswrapper[4684]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Jan 21 10:06:22 crc kubenswrapper[4684]: > logger="UnhandledError" Jan 21 10:06:22 crc kubenswrapper[4684]: E0121 10:06:22.123916 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 21 10:06:22 crc kubenswrapper[4684]: E0121 10:06:22.128023 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Jan 21 10:06:22 crc kubenswrapper[4684]: E0121 10:06:22.129304 4684 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 10:06:22 crc kubenswrapper[4684]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Jan 21 10:06:22 crc kubenswrapper[4684]: if [[ -f "/env/_master" ]]; then Jan 21 10:06:22 crc kubenswrapper[4684]: set -o allexport Jan 21 10:06:22 crc kubenswrapper[4684]: source "/env/_master" Jan 21 10:06:22 crc kubenswrapper[4684]: set +o allexport Jan 21 10:06:22 crc kubenswrapper[4684]: fi Jan 21 10:06:22 crc kubenswrapper[4684]: Jan 21 10:06:22 crc kubenswrapper[4684]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 21 10:06:22 crc kubenswrapper[4684]: exec 
/usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 21 10:06:22 crc kubenswrapper[4684]: --disable-webhook \ Jan 21 10:06:22 crc kubenswrapper[4684]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 21 10:06:22 crc kubenswrapper[4684]: --loglevel="${LOGLEVEL}" Jan 21 10:06:22 crc kubenswrapper[4684]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 21 10:06:22 crc kubenswrapper[4684]: > logger="UnhandledError" Jan 21 10:06:22 crc kubenswrapper[4684]: E0121 10:06:22.131580 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.167859 4684 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.167927 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.303542 4684 reflector.go:368] Caches populated for *v1.Service from 
k8s.io/client-go/informers/factory.go:160 Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.338229 4684 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.384114 4684 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 21 10:06:22 crc kubenswrapper[4684]: W0121 10:06:22.384557 4684 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 10:06:22 crc kubenswrapper[4684]: W0121 10:06:22.384653 4684 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received Jan 21 10:06:22 crc kubenswrapper[4684]: W0121 10:06:22.384573 4684 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 21 10:06:22 crc kubenswrapper[4684]: W0121 10:06:22.384580 4684 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 10:06:22 crc kubenswrapper[4684]: W0121 10:06:22.384578 4684 reflector.go:484] pkg/kubelet/config/apiserver.go:66: watch of *v1.Pod ended with: very short watch: pkg/kubelet/config/apiserver.go:66: Unexpected watch close - watch lasted less than a second and no items received Jan 21 10:06:22 crc kubenswrapper[4684]: W0121 10:06:22.384579 4684 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 21 10:06:22 crc kubenswrapper[4684]: W0121 10:06:22.384591 4684 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 21 10:06:22 crc kubenswrapper[4684]: W0121 10:06:22.384597 4684 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 10:06:22 crc kubenswrapper[4684]: W0121 10:06:22.384611 4684 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received Jan 21 10:06:22 crc kubenswrapper[4684]: W0121 10:06:22.384617 4684 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: 
object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 21 10:06:22 crc kubenswrapper[4684]: W0121 10:06:22.384621 4684 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 10:06:22 crc kubenswrapper[4684]: W0121 10:06:22.384713 4684 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.473702 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 09:22:11.836055253 +0000 UTC Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.508184 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:06:22 crc kubenswrapper[4684]: E0121 10:06:22.508310 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:06:23.508285841 +0000 UTC m=+21.266368808 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.508477 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.508541 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:06:22 crc kubenswrapper[4684]: E0121 10:06:22.508677 4684 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:06:22 crc kubenswrapper[4684]: E0121 10:06:22.508702 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:06:22 crc kubenswrapper[4684]: E0121 10:06:22.508723 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:06:22 crc kubenswrapper[4684]: E0121 10:06:22.508729 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:06:23.508717295 +0000 UTC m=+21.266800262 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:06:22 crc kubenswrapper[4684]: E0121 10:06:22.508738 4684 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:06:22 crc kubenswrapper[4684]: E0121 10:06:22.508774 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-21 10:06:23.508763626 +0000 UTC m=+21.266846593 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:06:22 crc kubenswrapper[4684]: E0121 10:06:22.508787 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.508796 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:06:22 crc kubenswrapper[4684]: E0121 10:06:22.508810 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:06:22 crc kubenswrapper[4684]: E0121 10:06:22.508829 4684 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.508835 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:06:22 crc kubenswrapper[4684]: E0121 10:06:22.508868 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 10:06:23.508854989 +0000 UTC m=+21.266937956 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:06:22 crc kubenswrapper[4684]: E0121 10:06:22.508907 4684 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:06:22 crc kubenswrapper[4684]: E0121 10:06:22.508961 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-21 10:06:23.508948442 +0000 UTC m=+21.267031409 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.517525 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.518089 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.519051 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.519755 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.520452 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.521004 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.521691 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.522348 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.523070 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.523646 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.524201 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.528776 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.529516 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.530599 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.531109 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.532094 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.532703 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.533054 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.534101 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.534703 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.535568 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.536088 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.536554 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.537681 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.538139 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.539224 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.539912 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.540879 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.541570 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.542351 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.542838 4684 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.542943 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.544651 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.545753 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.546155 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.547641 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.548661 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.549140 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.549571 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.550125 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.550773 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.551898 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.552509 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.553505 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.554108 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.554947 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.555493 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.556716 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.557413 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.558214 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.558706 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.559537 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.560031 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.560587 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.561546 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.570142 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.581637 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.591710 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.611433 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a86627cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20
f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.629887 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.640298 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.642268 4684 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8" exitCode=255 Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.642375 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8"} Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.643279 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4686b11ba91c3fd7b0336638072583ed03128f28ce9aec1b9b60270a2381018e"} Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.644838 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6f8fcbd590f6bf9fed30ed2a8822bf60f8d0df6fc434771017fa1afd6f7e7613"} Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.646310 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"874534fbe5d4b8cfbfa0f3043b39656df45873a39e8f6862cc3c65139186d01b"} Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.650063 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.659335 4684 scope.go:117] "RemoveContainer" containerID="f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.680718 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.714000 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.724202 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.795027 4684 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-21 10:01:21 +0000 UTC, rotation deadline is 2026-12-05 02:42:55.726160395 +0000 UTC Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.795110 4684 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7624h36m32.931053722s for next certificate rotation Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.800166 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a86627cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:06:22 crc kubenswrapper[4684]: I0121 10:06:22.964665 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.010417 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.032648 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.211930 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.233600 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.308127 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.432447 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.449498 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.474128 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 10:56:17.548284914 +0000 UTC Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.479410 4684 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.480466 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-gshl8","openshift-ovn-kubernetes/ovnkube-node-6vjwl","openshift-dns/node-resolver-xpk8b","openshift-kube-apiserver/kube-apiserver-crc","openshift-machine-config-operator/machine-config-daemon-sff6s","openshift-multus/multus-6jwd4"] Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.481033 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-xpk8b" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.481901 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gshl8" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.482992 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.483459 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.483639 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.484573 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.484582 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.487657 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.487890 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.487905 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.489153 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.489436 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.489485 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.489667 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.489800 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.489917 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.489672 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.490077 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.489917 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.490345 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.490397 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.490458 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.490477 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.490526 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 10:06:23 crc 
kubenswrapper[4684]: I0121 10:06:23.490398 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.490586 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.490939 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.500792 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.513619 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.513665 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.513717 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:06:23 crc kubenswrapper[4684]: E0121 10:06:23.513748 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:06:23 crc kubenswrapper[4684]: E0121 10:06:23.513901 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:06:23 crc kubenswrapper[4684]: E0121 10:06:23.514080 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.515861 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.527621 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:06:23 crc kubenswrapper[4684]: E0121 10:06:23.527716 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:06:25.527697899 +0000 UTC m=+23.285780866 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.527788 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.527813 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.527836 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c2704b9e-474a-466a-b78c-d136a2f95a3b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gshl8\" (UID: \"c2704b9e-474a-466a-b78c-d136a2f95a3b\") " pod="openshift-multus/multus-additional-cni-plugins-gshl8" Jan 21 10:06:23 crc kubenswrapper[4684]: E0121 10:06:23.527919 4684 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.527938 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c2704b9e-474a-466a-b78c-d136a2f95a3b-system-cni-dir\") pod \"multus-additional-cni-plugins-gshl8\" (UID: \"c2704b9e-474a-466a-b78c-d136a2f95a3b\") " pod="openshift-multus/multus-additional-cni-plugins-gshl8" Jan 21 10:06:23 crc kubenswrapper[4684]: E0121 10:06:23.528000 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:06:23 crc kubenswrapper[4684]: E0121 10:06:23.528022 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:06:23 crc kubenswrapper[4684]: E0121 10:06:23.528037 4684 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:06:23 crc kubenswrapper[4684]: E0121 10:06:23.528126 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-21 10:06:25.528079271 +0000 UTC m=+23.286162238 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.528190 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c2704b9e-474a-466a-b78c-d136a2f95a3b-os-release\") pod \"multus-additional-cni-plugins-gshl8\" (UID: \"c2704b9e-474a-466a-b78c-d136a2f95a3b\") " pod="openshift-multus/multus-additional-cni-plugins-gshl8" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.528237 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c2704b9e-474a-466a-b78c-d136a2f95a3b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gshl8\" (UID: \"c2704b9e-474a-466a-b78c-d136a2f95a3b\") " pod="openshift-multus/multus-additional-cni-plugins-gshl8" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.528280 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.528324 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c2704b9e-474a-466a-b78c-d136a2f95a3b-cnibin\") pod \"multus-additional-cni-plugins-gshl8\" (UID: \"c2704b9e-474a-466a-b78c-d136a2f95a3b\") " pod="openshift-multus/multus-additional-cni-plugins-gshl8" Jan 21 10:06:23 crc kubenswrapper[4684]: E0121 10:06:23.528349 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:06:25.52833663 +0000 UTC m=+23.286419597 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.528392 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/af519c74-92e1-4b1e-84a9-148aa5d0aa2e-hosts-file\") pod \"node-resolver-xpk8b\" (UID: \"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\") " pod="openshift-dns/node-resolver-xpk8b" Jan 21 10:06:23 crc kubenswrapper[4684]: E0121 10:06:23.528464 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.528481 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c2704b9e-474a-466a-b78c-d136a2f95a3b-cni-binary-copy\") pod \"multus-additional-cni-plugins-gshl8\" (UID: \"c2704b9e-474a-466a-b78c-d136a2f95a3b\") " pod="openshift-multus/multus-additional-cni-plugins-gshl8" Jan 21 10:06:23 crc kubenswrapper[4684]: E0121 10:06:23.528488 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.528511 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b95j9\" (UniqueName: \"kubernetes.io/projected/af519c74-92e1-4b1e-84a9-148aa5d0aa2e-kube-api-access-b95j9\") pod \"node-resolver-xpk8b\" (UID: \"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\") " pod="openshift-dns/node-resolver-xpk8b" Jan 21 10:06:23 crc kubenswrapper[4684]: E0121 10:06:23.528519 4684 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:06:23 crc kubenswrapper[4684]: E0121 10:06:23.528590 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 10:06:25.528579357 +0000 UTC m=+23.286662534 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.528631 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.528695 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsmbh\" (UniqueName: \"kubernetes.io/projected/c2704b9e-474a-466a-b78c-d136a2f95a3b-kube-api-access-rsmbh\") pod \"multus-additional-cni-plugins-gshl8\" (UID: \"c2704b9e-474a-466a-b78c-d136a2f95a3b\") " pod="openshift-multus/multus-additional-cni-plugins-gshl8" Jan 21 10:06:23 crc kubenswrapper[4684]: E0121 10:06:23.528710 4684 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:06:23 crc kubenswrapper[4684]: E0121 10:06:23.528759 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:06:25.528747823 +0000 UTC m=+23.286830790 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.542284 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.553600 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.580143 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a866
27cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.594641 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.650998 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-etc-openvswitch\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.651041 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-etc-kubernetes\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.651059 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-run-ovn-kubernetes\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 
crc kubenswrapper[4684]: I0121 10:06:23.651083 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t888c\" (UniqueName: \"kubernetes.io/projected/1e7ac4c6-b960-418c-b057-e55d95a213cd-kube-api-access-t888c\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.651103 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-slash\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.651122 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8dac888b-051f-405a-8c23-60c205d2aecc-env-overrides\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.651141 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h68cm\" (UniqueName: \"kubernetes.io/projected/8dac888b-051f-405a-8c23-60c205d2aecc-kube-api-access-h68cm\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.651163 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-host-var-lib-cni-multus\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.651181 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-systemd-units\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.651199 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-log-socket\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.651225 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-hostroot\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.651244 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-host-run-k8s-cni-cncf-io\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " 
pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.651259 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-var-lib-openvswitch\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.651278 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c2704b9e-474a-466a-b78c-d136a2f95a3b-system-cni-dir\") pod \"multus-additional-cni-plugins-gshl8\" (UID: \"c2704b9e-474a-466a-b78c-d136a2f95a3b\") " pod="openshift-multus/multus-additional-cni-plugins-gshl8" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.651294 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c2704b9e-474a-466a-b78c-d136a2f95a3b-os-release\") pod \"multus-additional-cni-plugins-gshl8\" (UID: \"c2704b9e-474a-466a-b78c-d136a2f95a3b\") " pod="openshift-multus/multus-additional-cni-plugins-gshl8" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.651318 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db9hs\" (UniqueName: \"kubernetes.io/projected/55d2c484-cf10-46b5-913f-2a033a2ff5c1-kube-api-access-db9hs\") pod \"machine-config-daemon-sff6s\" (UID: \"55d2c484-cf10-46b5-913f-2a033a2ff5c1\") " pod="openshift-machine-config-operator/machine-config-daemon-sff6s" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.651335 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1e7ac4c6-b960-418c-b057-e55d95a213cd-cni-binary-copy\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.651350 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-run-openvswitch\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.651393 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.651414 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8dac888b-051f-405a-8c23-60c205d2aecc-ovnkube-script-lib\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.651431 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-cni-bin\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.651445 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-cni-netd\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.651377 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c2704b9e-474a-466a-b78c-d136a2f95a3b-system-cni-dir\") pod \"multus-additional-cni-plugins-gshl8\" (UID: \"c2704b9e-474a-466a-b78c-d136a2f95a3b\") " pod="openshift-multus/multus-additional-cni-plugins-gshl8" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.651522 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/af519c74-92e1-4b1e-84a9-148aa5d0aa2e-hosts-file\") pod \"node-resolver-xpk8b\" (UID: \"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\") " pod="openshift-dns/node-resolver-xpk8b" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.651580 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-multus-conf-dir\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.651704 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-run-ovn\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.651742 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c2704b9e-474a-466a-b78c-d136a2f95a3b-os-release\") pod \"multus-additional-cni-plugins-gshl8\" (UID: \"c2704b9e-474a-466a-b78c-d136a2f95a3b\") " pod="openshift-multus/multus-additional-cni-plugins-gshl8" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.651747 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/55d2c484-cf10-46b5-913f-2a033a2ff5c1-mcd-auth-proxy-config\") pod \"machine-config-daemon-sff6s\" (UID: \"55d2c484-cf10-46b5-913f-2a033a2ff5c1\") " pod="openshift-machine-config-operator/machine-config-daemon-sff6s" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.651782 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/af519c74-92e1-4b1e-84a9-148aa5d0aa2e-hosts-file\") pod \"node-resolver-xpk8b\" (UID: \"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\") " pod="openshift-dns/node-resolver-xpk8b" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.651853 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/1e7ac4c6-b960-418c-b057-e55d95a213cd-multus-daemon-config\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.651910 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-run-netns\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.651960 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-multus-socket-dir-parent\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.651986 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-host-var-lib-cni-bin\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.652012 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-kubelet\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.652041 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsmbh\" (UniqueName: \"kubernetes.io/projected/c2704b9e-474a-466a-b78c-d136a2f95a3b-kube-api-access-rsmbh\") pod \"multus-additional-cni-plugins-gshl8\" (UID: \"c2704b9e-474a-466a-b78c-d136a2f95a3b\") " pod="openshift-multus/multus-additional-cni-plugins-gshl8" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.652066 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/55d2c484-cf10-46b5-913f-2a033a2ff5c1-rootfs\") pod \"machine-config-daemon-sff6s\" (UID: \"55d2c484-cf10-46b5-913f-2a033a2ff5c1\") " pod="openshift-machine-config-operator/machine-config-daemon-sff6s" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.652092 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/55d2c484-cf10-46b5-913f-2a033a2ff5c1-proxy-tls\") pod \"machine-config-daemon-sff6s\" (UID: \"55d2c484-cf10-46b5-913f-2a033a2ff5c1\") " pod="openshift-machine-config-operator/machine-config-daemon-sff6s" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.652117 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-multus-cni-dir\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.652139 4684 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-cnibin\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.652162 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-host-var-lib-kubelet\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.652188 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-run-systemd\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.652212 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8dac888b-051f-405a-8c23-60c205d2aecc-ovnkube-config\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.652257 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c2704b9e-474a-466a-b78c-d136a2f95a3b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gshl8\" (UID: \"c2704b9e-474a-466a-b78c-d136a2f95a3b\") " pod="openshift-multus/multus-additional-cni-plugins-gshl8" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.652282 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-system-cni-dir\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.652306 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-host-run-netns\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.652335 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c2704b9e-474a-466a-b78c-d136a2f95a3b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gshl8\" (UID: \"c2704b9e-474a-466a-b78c-d136a2f95a3b\") " pod="openshift-multus/multus-additional-cni-plugins-gshl8" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.652378 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-os-release\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.652405 4684 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-host-run-multus-certs\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.652431 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c2704b9e-474a-466a-b78c-d136a2f95a3b-cnibin\") pod \"multus-additional-cni-plugins-gshl8\" (UID: \"c2704b9e-474a-466a-b78c-d136a2f95a3b\") " pod="openshift-multus/multus-additional-cni-plugins-gshl8" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.652457 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-node-log\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.652821 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c2704b9e-474a-466a-b78c-d136a2f95a3b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gshl8\" (UID: \"c2704b9e-474a-466a-b78c-d136a2f95a3b\") " pod="openshift-multus/multus-additional-cni-plugins-gshl8" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.652823 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c2704b9e-474a-466a-b78c-d136a2f95a3b-cnibin\") pod \"multus-additional-cni-plugins-gshl8\" (UID: \"c2704b9e-474a-466a-b78c-d136a2f95a3b\") " pod="openshift-multus/multus-additional-cni-plugins-gshl8" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.652971 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c2704b9e-474a-466a-b78c-d136a2f95a3b-cni-binary-copy\") pod \"multus-additional-cni-plugins-gshl8\" (UID: \"c2704b9e-474a-466a-b78c-d136a2f95a3b\") " pod="openshift-multus/multus-additional-cni-plugins-gshl8" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.653044 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b95j9\" (UniqueName: \"kubernetes.io/projected/af519c74-92e1-4b1e-84a9-148aa5d0aa2e-kube-api-access-b95j9\") pod \"node-resolver-xpk8b\" (UID: \"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\") " pod="openshift-dns/node-resolver-xpk8b" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.653087 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8dac888b-051f-405a-8c23-60c205d2aecc-ovn-node-metrics-cert\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.653565 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c2704b9e-474a-466a-b78c-d136a2f95a3b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gshl8\" (UID: \"c2704b9e-474a-466a-b78c-d136a2f95a3b\") " pod="openshift-multus/multus-additional-cni-plugins-gshl8" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 
10:06:23.653895 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c2704b9e-474a-466a-b78c-d136a2f95a3b-cni-binary-copy\") pod \"multus-additional-cni-plugins-gshl8\" (UID: \"c2704b9e-474a-466a-b78c-d136a2f95a3b\") " pod="openshift-multus/multus-additional-cni-plugins-gshl8" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.657154 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb"} Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.661868 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.664827 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753"} Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.664889 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.667209 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.669405 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83"} Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.669467 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef"} Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.673565 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsmbh\" (UniqueName: \"kubernetes.io/projected/c2704b9e-474a-466a-b78c-d136a2f95a3b-kube-api-access-rsmbh\") pod \"multus-additional-cni-plugins-gshl8\" (UID: \"c2704b9e-474a-466a-b78c-d136a2f95a3b\") " pod="openshift-multus/multus-additional-cni-plugins-gshl8" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.685804 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b95j9\" (UniqueName: \"kubernetes.io/projected/af519c74-92e1-4b1e-84a9-148aa5d0aa2e-kube-api-access-b95j9\") pod \"node-resolver-xpk8b\" (UID: \"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\") " pod="openshift-dns/node-resolver-xpk8b" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.690712 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.697741 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.712900 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a866
27cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.725516 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.741729 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.742699 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.753747 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db9hs\" (UniqueName: 
\"kubernetes.io/projected/55d2c484-cf10-46b5-913f-2a033a2ff5c1-kube-api-access-db9hs\") pod \"machine-config-daemon-sff6s\" (UID: \"55d2c484-cf10-46b5-913f-2a033a2ff5c1\") " pod="openshift-machine-config-operator/machine-config-daemon-sff6s" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.753799 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1e7ac4c6-b960-418c-b057-e55d95a213cd-cni-binary-copy\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.753830 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-host-run-k8s-cni-cncf-io\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.753861 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-var-lib-openvswitch\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.753888 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-run-openvswitch\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.753915 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.753940 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8dac888b-051f-405a-8c23-60c205d2aecc-ovnkube-script-lib\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.753962 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-multus-conf-dir\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.753983 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-run-ovn\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.754004 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-cni-bin\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.754023 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-cni-netd\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.754073 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/55d2c484-cf10-46b5-913f-2a033a2ff5c1-mcd-auth-proxy-config\") pod \"machine-config-daemon-sff6s\" (UID: \"55d2c484-cf10-46b5-913f-2a033a2ff5c1\") " pod="openshift-machine-config-operator/machine-config-daemon-sff6s" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.754101 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1e7ac4c6-b960-418c-b057-e55d95a213cd-multus-daemon-config\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.754132 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-run-netns\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.754154 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-host-var-lib-cni-bin\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.754182 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-kubelet\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.754208 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-multus-socket-dir-parent\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.754231 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/55d2c484-cf10-46b5-913f-2a033a2ff5c1-rootfs\") pod \"machine-config-daemon-sff6s\" (UID: \"55d2c484-cf10-46b5-913f-2a033a2ff5c1\") " pod="openshift-machine-config-operator/machine-config-daemon-sff6s" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.754254 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/55d2c484-cf10-46b5-913f-2a033a2ff5c1-proxy-tls\") 
pod \"machine-config-daemon-sff6s\" (UID: \"55d2c484-cf10-46b5-913f-2a033a2ff5c1\") " pod="openshift-machine-config-operator/machine-config-daemon-sff6s" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.754285 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-host-var-lib-kubelet\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.754320 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-multus-cni-dir\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.754354 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-cnibin\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.754395 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-system-cni-dir\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.754415 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-host-run-netns\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.754434 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-run-systemd\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.754452 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8dac888b-051f-405a-8c23-60c205d2aecc-ovnkube-config\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.754505 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-host-run-multus-certs\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.754542 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-os-release\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.754565 4684 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-node-log\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.754584 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8dac888b-051f-405a-8c23-60c205d2aecc-ovn-node-metrics-cert\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.754632 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-etc-openvswitch\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.754654 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-etc-kubernetes\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.754677 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-run-ovn-kubernetes\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.754699 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8dac888b-051f-405a-8c23-60c205d2aecc-env-overrides\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.754722 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h68cm\" (UniqueName: \"kubernetes.io/projected/8dac888b-051f-405a-8c23-60c205d2aecc-kube-api-access-h68cm\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.754744 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t888c\" (UniqueName: \"kubernetes.io/projected/1e7ac4c6-b960-418c-b057-e55d95a213cd-kube-api-access-t888c\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.754769 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-slash\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.754826 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" 
(UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-log-socket\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.754850 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-host-var-lib-cni-multus\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.754875 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-systemd-units\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.754900 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-hostroot\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.754891 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-host-var-lib-kubelet\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.755004 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-os-release\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.755046 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-node-log\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.755651 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-host-run-multus-certs\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.755656 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-multus-cni-dir\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.755871 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1e7ac4c6-b960-418c-b057-e55d95a213cd-cni-binary-copy\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 
10:06:23.755924 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-host-run-k8s-cni-cncf-io\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.755934 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-etc-kubernetes\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.755961 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-var-lib-openvswitch\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.755997 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-run-openvswitch\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.756028 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.756065 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-run-ovn-kubernetes\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.756085 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8dac888b-051f-405a-8c23-60c205d2aecc-ovnkube-config\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.756152 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-cnibin\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.756174 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-etc-openvswitch\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.756203 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-system-cni-dir\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.756210 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-kubelet\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.756237 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-run-netns\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.756239 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-host-run-netns\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.756271 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-log-socket\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.756273 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-host-var-lib-cni-bin\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.756311 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-run-systemd\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.756343 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-systemd-units\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.756394 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-host-var-lib-cni-multus\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.756435 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-cni-bin\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc 
kubenswrapper[4684]: I0121 10:06:23.756465 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-multus-conf-dir\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.756497 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-run-ovn\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.756546 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-hostroot\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.756583 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-slash\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.756604 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-cni-netd\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.756569 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8dac888b-051f-405a-8c23-60c205d2aecc-env-overrides\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.756637 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/55d2c484-cf10-46b5-913f-2a033a2ff5c1-rootfs\") pod \"machine-config-daemon-sff6s\" (UID: \"55d2c484-cf10-46b5-913f-2a033a2ff5c1\") " pod="openshift-machine-config-operator/machine-config-daemon-sff6s" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.756608 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8dac888b-051f-405a-8c23-60c205d2aecc-ovnkube-script-lib\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.756664 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1e7ac4c6-b960-418c-b057-e55d95a213cd-multus-socket-dir-parent\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.756865 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/1e7ac4c6-b960-418c-b057-e55d95a213cd-multus-daemon-config\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.757637 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/55d2c484-cf10-46b5-913f-2a033a2ff5c1-mcd-auth-proxy-config\") pod \"machine-config-daemon-sff6s\" (UID: \"55d2c484-cf10-46b5-913f-2a033a2ff5c1\") " pod="openshift-machine-config-operator/machine-config-daemon-sff6s" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.759100 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8dac888b-051f-405a-8c23-60c205d2aecc-ovn-node-metrics-cert\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.759884 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.769935 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/55d2c484-cf10-46b5-913f-2a033a2ff5c1-proxy-tls\") pod \"machine-config-daemon-sff6s\" (UID: \"55d2c484-cf10-46b5-913f-2a033a2ff5c1\") " pod="openshift-machine-config-operator/machine-config-daemon-sff6s" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.777355 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db9hs\" (UniqueName: \"kubernetes.io/projected/55d2c484-cf10-46b5-913f-2a033a2ff5c1-kube-api-access-db9hs\") pod \"machine-config-daemon-sff6s\" (UID: \"55d2c484-cf10-46b5-913f-2a033a2ff5c1\") " pod="openshift-machine-config-operator/machine-config-daemon-sff6s" Jan 21 
10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.779019 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.779638 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h68cm\" (UniqueName: \"kubernetes.io/projected/8dac888b-051f-405a-8c23-60c205d2aecc-kube-api-access-h68cm\") pod \"ovnkube-node-6vjwl\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.792150 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.794206 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t888c\" (UniqueName: \"kubernetes.io/projected/1e7ac4c6-b960-418c-b057-e55d95a213cd-kube-api-access-t888c\") pod \"multus-6jwd4\" (UID: \"1e7ac4c6-b960-418c-b057-e55d95a213cd\") " pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.797628 4684 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-dns/node-resolver-xpk8b" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.803936 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.804519 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gshl8" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.804633 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.812572 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:23 crc kubenswrapper[4684]: W0121 10:06:23.815743 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2704b9e_474a_466a_b78c_d136a2f95a3b.slice/crio-7be72d76b62c4af9b7bda5b9a6b4fcc6990a19c80b77f130d48394d05c9b3f43 WatchSource:0}: Error finding container 7be72d76b62c4af9b7bda5b9a6b4fcc6990a19c80b77f130d48394d05c9b3f43: Status 404 returned error can't find the container with id 7be72d76b62c4af9b7bda5b9a6b4fcc6990a19c80b77f130d48394d05c9b3f43 Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.817766 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.818587 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-6jwd4" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.829347 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.829413 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.852665 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers 
with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"s
ystemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:23 crc kubenswrapper[4684]: W0121 10:06:23.864357 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e7ac4c6_b960_418c_b057_e55d95a213cd.slice/crio-5b3693ce0ab2ec138718cfda93f658ab119c1c91c510ed19054c1f02be27c19d WatchSource:0}: Error finding container 5b3693ce0ab2ec138718cfda93f658ab119c1c91c510ed19054c1f02be27c19d: Status 404 returned error can't find the container with id 5b3693ce0ab2ec138718cfda93f658ab119c1c91c510ed19054c1f02be27c19d Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.877691 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.893942 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.910256 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:23 crc kubenswrapper[4684]: I0121 10:06:23.963968 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.457226 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-hktl5"] Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.457643 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-hktl5" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.465043 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.465046 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.465191 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.465274 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.477002 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 12:06:53.433565629 +0000 UTC Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.486834 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.509294 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.512500 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.519774 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.537994 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.553859 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.569790 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d6d6e52c-91fd-469d-af23-45c3833eb9d7-host\") pod \"node-ca-hktl5\" (UID: \"d6d6e52c-91fd-469d-af23-45c3833eb9d7\") " pod="openshift-image-registry/node-ca-hktl5" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.569833 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp8lj\" (UniqueName: \"kubernetes.io/projected/d6d6e52c-91fd-469d-af23-45c3833eb9d7-kube-api-access-dp8lj\") pod \"node-ca-hktl5\" (UID: \"d6d6e52c-91fd-469d-af23-45c3833eb9d7\") " pod="openshift-image-registry/node-ca-hktl5" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.569860 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d6d6e52c-91fd-469d-af23-45c3833eb9d7-serviceca\") pod \"node-ca-hktl5\" (UID: \"d6d6e52c-91fd-469d-af23-45c3833eb9d7\") " pod="openshift-image-registry/node-ca-hktl5" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.593702 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a866
27cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.621397 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.671160 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp8lj\" (UniqueName: \"kubernetes.io/projected/d6d6e52c-91fd-469d-af23-45c3833eb9d7-kube-api-access-dp8lj\") pod \"node-ca-hktl5\" (UID: \"d6d6e52c-91fd-469d-af23-45c3833eb9d7\") " pod="openshift-image-registry/node-ca-hktl5" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.671219 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d6d6e52c-91fd-469d-af23-45c3833eb9d7-host\") pod \"node-ca-hktl5\" (UID: \"d6d6e52c-91fd-469d-af23-45c3833eb9d7\") " pod="openshift-image-registry/node-ca-hktl5" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.671255 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d6d6e52c-91fd-469d-af23-45c3833eb9d7-serviceca\") pod \"node-ca-hktl5\" (UID: \"d6d6e52c-91fd-469d-af23-45c3833eb9d7\") " pod="openshift-image-registry/node-ca-hktl5" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.671467 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d6d6e52c-91fd-469d-af23-45c3833eb9d7-host\") pod \"node-ca-hktl5\" (UID: \"d6d6e52c-91fd-469d-af23-45c3833eb9d7\") " pod="openshift-image-registry/node-ca-hktl5" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.672656 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d6d6e52c-91fd-469d-af23-45c3833eb9d7-serviceca\") pod \"node-ca-hktl5\" (UID: \"d6d6e52c-91fd-469d-af23-45c3833eb9d7\") " pod="openshift-image-registry/node-ca-hktl5" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.678757 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" 
event={"ID":"55d2c484-cf10-46b5-913f-2a033a2ff5c1","Type":"ContainerStarted","Data":"a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4"} Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.678825 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" event={"ID":"55d2c484-cf10-46b5-913f-2a033a2ff5c1","Type":"ContainerStarted","Data":"dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335"} Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.678842 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" event={"ID":"55d2c484-cf10-46b5-913f-2a033a2ff5c1","Type":"ContainerStarted","Data":"bbb276affc00b142b7cff5d73b3a4bf0964449d8631a6604db746618f2ad2b8a"} Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.681332 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6jwd4" event={"ID":"1e7ac4c6-b960-418c-b057-e55d95a213cd","Type":"ContainerStarted","Data":"a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857"} Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.681399 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6jwd4" event={"ID":"1e7ac4c6-b960-418c-b057-e55d95a213cd","Type":"ContainerStarted","Data":"5b3693ce0ab2ec138718cfda93f658ab119c1c91c510ed19054c1f02be27c19d"} Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.683113 4684 generic.go:334] "Generic (PLEG): container finished" podID="8dac888b-051f-405a-8c23-60c205d2aecc" containerID="225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed" exitCode=0 Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.683196 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" event={"ID":"8dac888b-051f-405a-8c23-60c205d2aecc","Type":"ContainerDied","Data":"225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed"} Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.683215 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" event={"ID":"8dac888b-051f-405a-8c23-60c205d2aecc","Type":"ContainerStarted","Data":"456e71751215fd65a71e44af5d6eaa47482e203d27ef9a98426569053bf5c932"} Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.686668 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" event={"ID":"c2704b9e-474a-466a-b78c-d136a2f95a3b","Type":"ContainerStarted","Data":"c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7"} Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.686729 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" event={"ID":"c2704b9e-474a-466a-b78c-d136a2f95a3b","Type":"ContainerStarted","Data":"7be72d76b62c4af9b7bda5b9a6b4fcc6990a19c80b77f130d48394d05c9b3f43"} Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.694353 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xpk8b" event={"ID":"af519c74-92e1-4b1e-84a9-148aa5d0aa2e","Type":"ContainerStarted","Data":"54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b"} Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.694426 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xpk8b" 
event={"ID":"af519c74-92e1-4b1e-84a9-148aa5d0aa2e","Type":"ContainerStarted","Data":"643bf1cb18ee033457d8baf8e867967e2f9bf6a66ad25110f84d435eb0841873"} Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.714246 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.719975 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp8lj\" (UniqueName: \"kubernetes.io/projected/d6d6e52c-91fd-469d-af23-45c3833eb9d7-kube-api-access-dp8lj\") pod \"node-ca-hktl5\" (UID: \"d6d6e52c-91fd-469d-af23-45c3833eb9d7\") " pod="openshift-image-registry/node-ca-hktl5" Jan 21 10:06:24 crc kubenswrapper[4684]: E0121 10:06:24.739261 4684 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.761794 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.778030 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-hktl5" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.787553 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.814824 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.835278 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.862753 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.887698 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.907890 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.920393 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.940500 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.960553 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T10:06:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:24 crc kubenswrapper[4684]: I0121 10:06:24.990716 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.007757 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.069331 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.102494 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.123097 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.140605 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.153453 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.167586 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.184601 4684 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45cf8c338a42e
aff90f551f49c7b98fe724ec8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.199170 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.218417 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.249611 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c687
7441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a86627cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\
\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.263240 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.343466 4684 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.345650 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.345710 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.345722 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.345862 4684 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.355577 4684 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.355667 4684 kubelet_node_status.go:79] 
"Successfully registered node" node="crc" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.357165 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.357186 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.357215 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.357229 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.357240 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:25Z","lastTransitionTime":"2026-01-21T10:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:25 crc kubenswrapper[4684]: E0121 10:06:25.380328 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74077632-3431-446e-a506-0b618e843835\\\",\\\"systemUUID\\\":\\\"667581df-3edd-489b-b118-69df188c96a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.384768 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.384816 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.384828 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.384851 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.384868 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:25Z","lastTransitionTime":"2026-01-21T10:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:25 crc kubenswrapper[4684]: E0121 10:06:25.400341 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74077632-3431-446e-a506-0b618e843835\\\",\\\"systemUUID\\\":\\\"667581df-3edd-489b-b118-69df188c96a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.403743 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.403765 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.403775 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.403792 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.403805 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:25Z","lastTransitionTime":"2026-01-21T10:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:25 crc kubenswrapper[4684]: E0121 10:06:25.415283 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74077632-3431-446e-a506-0b618e843835\\\",\\\"systemUUID\\\":\\\"667581df-3edd-489b-b118-69df188c96a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.427582 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.428010 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.428023 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.428042 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.428059 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:25Z","lastTransitionTime":"2026-01-21T10:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:25 crc kubenswrapper[4684]: E0121 10:06:25.443686 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74077632-3431-446e-a506-0b618e843835\\\",\\\"systemUUID\\\":\\\"667581df-3edd-489b-b118-69df188c96a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.447656 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.447691 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.447702 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.447720 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.447731 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:25Z","lastTransitionTime":"2026-01-21T10:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:25 crc kubenswrapper[4684]: E0121 10:06:25.460955 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74077632-3431-446e-a506-0b618e843835\\\",\\\"systemUUID\\\":\\\"667581df-3edd-489b-b118-69df188c96a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc kubenswrapper[4684]: E0121 10:06:25.461123 4684 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.463059 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.463101 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.463112 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.463131 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.463144 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:25Z","lastTransitionTime":"2026-01-21T10:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.477386 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 03:09:19.42200513 +0000 UTC Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.513948 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.513986 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.514093 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:06:25 crc kubenswrapper[4684]: E0121 10:06:25.514208 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:06:25 crc kubenswrapper[4684]: E0121 10:06:25.514321 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:06:25 crc kubenswrapper[4684]: E0121 10:06:25.514557 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.565899 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.565942 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.565953 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.565970 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.565980 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:25Z","lastTransitionTime":"2026-01-21T10:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.581452 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.581629 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:06:25 crc kubenswrapper[4684]: E0121 10:06:25.581677 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:06:29.58164522 +0000 UTC m=+27.339728177 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.581745 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:06:25 crc kubenswrapper[4684]: E0121 10:06:25.581773 4684 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.581812 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:06:25 crc kubenswrapper[4684]: E0121 10:06:25.581848 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:06:29.581825906 +0000 UTC m=+27.339908893 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.581869 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:06:25 crc kubenswrapper[4684]: E0121 10:06:25.581898 4684 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:06:25 crc kubenswrapper[4684]: E0121 10:06:25.581984 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:06:29.5819623 +0000 UTC m=+27.340045267 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:06:25 crc kubenswrapper[4684]: E0121 10:06:25.582032 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:06:25 crc kubenswrapper[4684]: E0121 10:06:25.582047 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:06:25 crc kubenswrapper[4684]: E0121 10:06:25.582060 4684 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:06:25 crc kubenswrapper[4684]: E0121 10:06:25.582096 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 10:06:29.582089114 +0000 UTC m=+27.340172081 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:06:25 crc kubenswrapper[4684]: E0121 10:06:25.582169 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:06:25 crc kubenswrapper[4684]: E0121 10:06:25.582225 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:06:25 crc kubenswrapper[4684]: E0121 10:06:25.582244 4684 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:06:25 crc kubenswrapper[4684]: E0121 10:06:25.582328 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 10:06:29.582306041 +0000 UTC m=+27.340389018 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.676418 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.676475 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.676495 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.676518 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.676535 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:25Z","lastTransitionTime":"2026-01-21T10:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.708737 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" event={"ID":"8dac888b-051f-405a-8c23-60c205d2aecc","Type":"ContainerStarted","Data":"0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b"} Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.708783 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" event={"ID":"8dac888b-051f-405a-8c23-60c205d2aecc","Type":"ContainerStarted","Data":"ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124"} Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.708805 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" event={"ID":"8dac888b-051f-405a-8c23-60c205d2aecc","Type":"ContainerStarted","Data":"0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c"} Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.708815 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" event={"ID":"8dac888b-051f-405a-8c23-60c205d2aecc","Type":"ContainerStarted","Data":"472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544"} Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.710401 4684 generic.go:334] "Generic (PLEG): container finished" podID="c2704b9e-474a-466a-b78c-d136a2f95a3b" containerID="c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7" exitCode=0 Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.710480 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" event={"ID":"c2704b9e-474a-466a-b78c-d136a2f95a3b","Type":"ContainerDied","Data":"c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7"} Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.716309 4684 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-hktl5" event={"ID":"d6d6e52c-91fd-469d-af23-45c3833eb9d7","Type":"ContainerStarted","Data":"637e8c501924c32a0dc19e2006504a2724f48712feacd1e94c143c4013887ac1"} Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.716386 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-hktl5" event={"ID":"d6d6e52c-91fd-469d-af23-45c3833eb9d7","Type":"ContainerStarted","Data":"78e9d036d407cbb4ffc5faedba08c722db341b77356a492a00ae5406dee40016"} Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.728303 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc 
kubenswrapper[4684]: I0121 10:06:25.747927 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.762035 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.776415 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.781683 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.781722 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.781735 4684 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.781754 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.781766 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:25Z","lastTransitionTime":"2026-01-21T10:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.789714 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.804695 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.818260 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.838139 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.852315 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.867344 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.882863 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.885471 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.885520 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.885534 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.885556 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.885570 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:25Z","lastTransitionTime":"2026-01-21T10:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.894782 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.910592 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45cf8c338a42eaff90f551f49c7b98fe724ec8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.925003 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.952196 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da9632
9dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a86627cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.971727 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45cf8c338a42eaff90f551f49c7b98fe724ec8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.989348 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.989444 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.989468 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.989491 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.989505 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:25Z","lastTransitionTime":"2026-01-21T10:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:25 crc kubenswrapper[4684]: I0121 10:06:25.994588 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.060257 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.076534 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.092564 4684 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.092609 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.092621 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.092637 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.092647 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:26Z","lastTransitionTime":"2026-01-21T10:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.115147 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a86627cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.142543 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e8c501924c32a0dc19e2006504a2724f48712feacd1e94c143c4013887ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.180190 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.196872 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.196925 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.196940 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.196962 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.196984 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:26Z","lastTransitionTime":"2026-01-21T10:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.204114 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.217672 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"m
ultus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.233707 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.248206 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.263668 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.277393 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.289120 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.299540 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.299582 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.299594 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.299612 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.299623 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:26Z","lastTransitionTime":"2026-01-21T10:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.309017 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02c
beeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.402254 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.402300 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.402312 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.402329 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.402342 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:26Z","lastTransitionTime":"2026-01-21T10:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.478209 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 05:50:03.983231908 +0000 UTC Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.505736 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.505786 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.505828 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.505856 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.505872 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:26Z","lastTransitionTime":"2026-01-21T10:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.609246 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.609297 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.609310 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.609329 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.609342 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:26Z","lastTransitionTime":"2026-01-21T10:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.712320 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.712404 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.712430 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.712468 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.712508 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:26Z","lastTransitionTime":"2026-01-21T10:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.723444 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" event={"ID":"8dac888b-051f-405a-8c23-60c205d2aecc","Type":"ContainerStarted","Data":"6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e"} Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.723493 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" event={"ID":"8dac888b-051f-405a-8c23-60c205d2aecc","Type":"ContainerStarted","Data":"0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29"} Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.725067 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"925b275b3ce3428bc68dd39075a8e86e83b35703ea89a529c6ce6bbf53fa215a"} Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.727068 4684 generic.go:334] "Generic (PLEG): container finished" podID="c2704b9e-474a-466a-b78c-d136a2f95a3b" containerID="ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e" exitCode=0 Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.727106 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" event={"ID":"c2704b9e-474a-466a-b78c-d136a2f95a3b","Type":"ContainerDied","Data":"ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e"} Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.740000 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e8c501924c32a0dc19e2006504a2724f48712feacd1e94c143c4013887ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.771386 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a866
27cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.790829 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:26 crc 
kubenswrapper[4684]: I0121 10:06:26.808659 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.815119 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.815168 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.815179 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.815199 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.815212 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:26Z","lastTransitionTime":"2026-01-21T10:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.822308 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.836831 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.854474 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.872624 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.891763 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.912683 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:26Z 
is after 2025-08-24T17:21:41Z"
Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.918869 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.918905 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.918916 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.918936 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.918947 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:26Z","lastTransitionTime":"2026-01-21T10:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
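The NodeNotReady records that repeat through this window all carry the same condition: the runtime network is not ready because /etc/kubernetes/cni/net.d/ contains no CNI configuration yet, and the ovnkube-node pod that would write one is itself still in PodInitializing (see the ovnkube-node-6vjwl status above). A short Go sketch of that check, an illustration rather than the kubelet's actual code, which simply looks for a config file the way a CNI loader would:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory taken from the NetworkPluginNotReady message in the log.
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", dir, err)
		return
	}
	found := false
	for _, e := range entries {
		// Extensions CNI config loaders conventionally accept.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("found CNI config:", filepath.Join(dir, e.Name()))
			found = true
		}
	}
	if !found {
		// Matches the log: the node stays NotReady until a config appears.
		fmt.Println("no CNI configuration file in", dir)
	}
}

On this node the check would come back empty until ovn-kubernetes writes its config into that directory; once it does, the Ready condition flips back and these setters.go entries stop.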
Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.926756 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.945321 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.958071 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925b275b3ce3428bc68dd39075a8e86e83b35703ea89a529c6ce6bbf53fa215a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.974080 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:26 crc kubenswrapper[4684]: I0121 10:06:26.988823 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-control
ler-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45cf8c338a42eaff90f551f49c7b98fe724ec8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:26Z is after 2025-08-24T17:21:41Z"
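The patch bodies in these status_manager.go entries are JSON documents serialized inside Go strings, so by the time they reach the journal every quote carries an extra escaping level and reads as \\\", which makes the patches hard to eyeball. A throwaway Go helper (purely for reading this log, not part of any OpenShift tooling) that peels one escaping level with strconv.Unquote and pretty-prints the result; apply Unquote once per remaining nesting level:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"strconv"
)

func main() {
	// A shortened fragment of the multus-6jwd4 patch from earlier in the
	// log, with one escaping level still present.
	raw := `"{\"metadata\":{\"uid\":\"1e7ac4c6-b960-418c-b057-e55d95a213cd\"},\"status\":{\"phase\":\"Running\"}}"`

	unquoted, err := strconv.Unquote(raw) // peel one level of escaping
	if err != nil {
		log.Fatalf("unquote: %v", err)
	}
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, []byte(unquoted), "", "  "); err != nil {
		log.Fatalf("not valid JSON after unquoting: %v", err)
	}
	fmt.Println(pretty.String())
}

Decoded this way, the $setElementOrder/conditions key marks each payload as a strategic merge patch against the pod's status subresource: that list tells the apiserver how to order the merged conditions array, while the conditions entries carry the actual updates.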
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.021612 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.021735 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.021752 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.021774 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.021790 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:27Z","lastTransitionTime":"2026-01-21T10:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.022967 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\
\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.039911 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.057916 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.073775 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.086132 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.097403 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.109149 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.124923 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.124968 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.124982 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.125024 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.125039 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:27Z","lastTransitionTime":"2026-01-21T10:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.127346 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02c
beeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.141591 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45cf8c338a42eaff90f551f49c7b98fe724ec8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.154082 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.167601 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925b275b3ce3428bc68dd39075a8e86e83b35703ea89a529c6ce6bbf53fa215a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.182092 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.204225 4684 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a86627cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.219436 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e8c501924c32a0dc19e2006504a2724f48712feacd1e94c143c4013887ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.227672 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.227795 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.227892 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.227970 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.228032 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:27Z","lastTransitionTime":"2026-01-21T10:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.332064 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.332512 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.332524 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.332543 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.332555 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:27Z","lastTransitionTime":"2026-01-21T10:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.435893 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.435958 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.435973 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.435995 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.436014 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:27Z","lastTransitionTime":"2026-01-21T10:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.478953 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 15:10:04.408222237 +0000 UTC Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.513802 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.513813 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:06:27 crc kubenswrapper[4684]: E0121 10:06:27.514072 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.513775 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:06:27 crc kubenswrapper[4684]: E0121 10:06:27.514203 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:06:27 crc kubenswrapper[4684]: E0121 10:06:27.514427 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.538792 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.538862 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.538884 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.538913 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.538936 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:27Z","lastTransitionTime":"2026-01-21T10:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.642043 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.642123 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.642136 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.642158 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.642172 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:27Z","lastTransitionTime":"2026-01-21T10:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.734467 4684 generic.go:334] "Generic (PLEG): container finished" podID="c2704b9e-474a-466a-b78c-d136a2f95a3b" containerID="218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f" exitCode=0 Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.734532 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" event={"ID":"c2704b9e-474a-466a-b78c-d136a2f95a3b","Type":"ContainerDied","Data":"218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f"} Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.745441 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.745487 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.745498 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.745518 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.745531 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:27Z","lastTransitionTime":"2026-01-21T10:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.762072 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a86627cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.780412 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e8c501924c32a0dc19e2006504a2724f48712feacd1e94c143c4013887ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.796553 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.811586 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.827734 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"
/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.845223 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.848561 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.848619 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.848632 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.848659 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.848673 4684 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:27Z","lastTransitionTime":"2026-01-21T10:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.863723 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.878455 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.892429 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.903604 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.928154 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:27Z 
is after 2025-08-24T17:21:41Z" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.942619 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45cf8c338a42eaff90f551f49c7b98fe724ec8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.951657 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.951704 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.951716 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.951735 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.951746 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:27Z","lastTransitionTime":"2026-01-21T10:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.957729 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.971116 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925b275b3ce3428bc68dd39075a8e86e83b35703ea89a529c6ce6bbf53fa215a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:27 crc kubenswrapper[4684]: I0121 10:06:27.982637 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.054162 4684 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.054201 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.054210 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.054230 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.054249 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:28Z","lastTransitionTime":"2026-01-21T10:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.157107 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.157145 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.157157 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.157176 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.157187 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:28Z","lastTransitionTime":"2026-01-21T10:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.259655 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.259696 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.259708 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.259726 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.259740 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:28Z","lastTransitionTime":"2026-01-21T10:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.363264 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.363311 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.363323 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.363342 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.363354 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:28Z","lastTransitionTime":"2026-01-21T10:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.467310 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.467352 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.467378 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.467396 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.467410 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:28Z","lastTransitionTime":"2026-01-21T10:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.480088 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 04:58:11.202053058 +0000 UTC Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.571214 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.571279 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.571295 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.571314 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.571328 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:28Z","lastTransitionTime":"2026-01-21T10:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.673809 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.674140 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.674415 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.674599 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.674757 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:28Z","lastTransitionTime":"2026-01-21T10:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.741403 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" event={"ID":"8dac888b-051f-405a-8c23-60c205d2aecc","Type":"ContainerStarted","Data":"24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294"} Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.743927 4684 generic.go:334] "Generic (PLEG): container finished" podID="c2704b9e-474a-466a-b78c-d136a2f95a3b" containerID="3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28" exitCode=0 Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.744009 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" event={"ID":"c2704b9e-474a-466a-b78c-d136a2f95a3b","Type":"ContainerDied","Data":"3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28"} Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.760606 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.777476 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.777549 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.777576 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.777610 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.777656 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:28Z","lastTransitionTime":"2026-01-21T10:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.778470 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.802881 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-releas
e\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.820382 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.836259 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.850319 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.865192 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.880001 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.880063 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.880085 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.880107 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.880121 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:28Z","lastTransitionTime":"2026-01-21T10:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.888229 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:28Z 
is after 2025-08-24T17:21:41Z" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.904949 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.918210 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.933776 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925b275b3ce3428bc68dd39075a8e86e83b35703ea89a529c6ce6bbf53fa215a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.946075 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.959729 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-control
ler-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45cf8c338a42eaff90f551f49c7b98fe724ec8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.981211 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a866
27cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.983114 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.983173 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.983189 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.983214 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.983227 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:28Z","lastTransitionTime":"2026-01-21T10:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:28 crc kubenswrapper[4684]: I0121 10:06:28.994752 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e8c501924c32a0dc19e2006504a2724f48712feacd1e94c143c4013887ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.082894 4684 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.086094 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.086149 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.086165 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.086182 4684 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.086196 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:29Z","lastTransitionTime":"2026-01-21T10:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.189064 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.189156 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.189178 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.189207 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.189224 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:29Z","lastTransitionTime":"2026-01-21T10:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.291531 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.291566 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.291577 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.291593 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.291603 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:29Z","lastTransitionTime":"2026-01-21T10:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.399673 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.399720 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.399733 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.399752 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.399762 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:29Z","lastTransitionTime":"2026-01-21T10:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.480408 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 05:58:30.102570221 +0000 UTC Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.502321 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.502388 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.502401 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.502420 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.502431 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:29Z","lastTransitionTime":"2026-01-21T10:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.513955 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.514004 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.513963 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:06:29 crc kubenswrapper[4684]: E0121 10:06:29.514092 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:06:29 crc kubenswrapper[4684]: E0121 10:06:29.514170 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:06:29 crc kubenswrapper[4684]: E0121 10:06:29.514283 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.604847 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.604887 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.604897 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.604913 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.604926 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:29Z","lastTransitionTime":"2026-01-21T10:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.623278 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.623436 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.623465 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:06:29 crc kubenswrapper[4684]: E0121 10:06:29.623512 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:06:37.623485418 +0000 UTC m=+35.381568385 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.623579 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.623635 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:06:29 crc kubenswrapper[4684]: E0121 10:06:29.623585 4684 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:06:29 crc kubenswrapper[4684]: E0121 10:06:29.623695 4684 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:06:29 crc kubenswrapper[4684]: E0121 
10:06:29.623694 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:06:29 crc kubenswrapper[4684]: E0121 10:06:29.623748 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:06:37.623726196 +0000 UTC m=+35.381809333 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:06:29 crc kubenswrapper[4684]: E0121 10:06:29.623755 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:06:29 crc kubenswrapper[4684]: E0121 10:06:29.623761 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:06:29 crc kubenswrapper[4684]: E0121 10:06:29.623768 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:06:29 crc kubenswrapper[4684]: E0121 10:06:29.623771 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:06:37.623762037 +0000 UTC m=+35.381845224 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:06:29 crc kubenswrapper[4684]: E0121 10:06:29.623776 4684 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:06:29 crc kubenswrapper[4684]: E0121 10:06:29.623775 4684 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:06:29 crc kubenswrapper[4684]: E0121 10:06:29.623804 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 10:06:37.623798218 +0000 UTC m=+35.381881185 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:06:29 crc kubenswrapper[4684]: E0121 10:06:29.623866 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 10:06:37.62385555 +0000 UTC m=+35.381938507 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.707702 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.707738 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.707748 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.707765 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.707776 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:29Z","lastTransitionTime":"2026-01-21T10:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.750130 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" event={"ID":"c2704b9e-474a-466a-b78c-d136a2f95a3b","Type":"ContainerStarted","Data":"473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a"} Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.765521 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.782405 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.796746 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.808698 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.810813 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.810868 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.810892 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.810915 4684 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.810928 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:29Z","lastTransitionTime":"2026-01-21T10:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.830680 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:29Z 
is after 2025-08-24T17:21:41Z" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.844674 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.857892 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.876463 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.893198 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.906491 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45cf8c338a42eaff90f551f49c7b98fe724ec8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.912871 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.912909 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.912920 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.912937 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.912950 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:29Z","lastTransitionTime":"2026-01-21T10:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.920107 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.931433 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925b275b3ce3428bc68dd39075a8e86e83b35703ea89a529c6ce6bbf53fa215a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.941500 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.958387 4684 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a86627cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:29 crc kubenswrapper[4684]: I0121 10:06:29.971232 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e8c501924c32a0dc19e2006504a2724f48712feacd1e94c143c4013887ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.014998 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.015060 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.015072 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.015093 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.015104 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:30Z","lastTransitionTime":"2026-01-21T10:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.118135 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.118173 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.118182 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.118197 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.118206 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:30Z","lastTransitionTime":"2026-01-21T10:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.220716 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.220774 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.220789 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.220812 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.220828 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:30Z","lastTransitionTime":"2026-01-21T10:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.324037 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.324460 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.324473 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.324492 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.324504 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:30Z","lastTransitionTime":"2026-01-21T10:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.429447 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.429522 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.429534 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.429554 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.429566 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:30Z","lastTransitionTime":"2026-01-21T10:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.481431 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 15:02:53.968188588 +0000 UTC Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.532293 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.532349 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.532387 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.532414 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.532430 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:30Z","lastTransitionTime":"2026-01-21T10:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.635490 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.635652 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.635673 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.635880 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.635934 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:30Z","lastTransitionTime":"2026-01-21T10:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.739862 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.739928 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.739950 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.739976 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.739994 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:30Z","lastTransitionTime":"2026-01-21T10:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.762059 4684 generic.go:334] "Generic (PLEG): container finished" podID="c2704b9e-474a-466a-b78c-d136a2f95a3b" containerID="473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a" exitCode=0 Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.762146 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" event={"ID":"c2704b9e-474a-466a-b78c-d136a2f95a3b","Type":"ContainerDied","Data":"473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a"} Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.770918 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" event={"ID":"8dac888b-051f-405a-8c23-60c205d2aecc","Type":"ContainerStarted","Data":"045cc447816583a6e70ffc588d1b4fdc0b682635f39ef089686211498f596045"} Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.771423 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.781775 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.801655 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.820011 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.839346 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.842566 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.842598 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.842609 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.842625 4684 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.842635 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:30Z","lastTransitionTime":"2026-01-21T10:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.845996 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.858915 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:30Z 
is after 2025-08-24T17:21:41Z" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.873951 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.887272 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.905640 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.922177 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.938066 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45cf8c338a42eaff90f551f49c7b98fe724ec8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.945548 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.945589 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.945600 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.945619 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.945634 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:30Z","lastTransitionTime":"2026-01-21T10:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.953646 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.968489 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925b275b3ce3428bc68dd39075a8e86e83b35703ea89a529c6ce6bbf53fa215a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:30 crc kubenswrapper[4684]: I0121 10:06:30.982565 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.033640 4684 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a86627cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.048416 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.048457 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.048469 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.048487 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.048502 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:31Z","lastTransitionTime":"2026-01-21T10:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.059673 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e8c501924c32a0dc19e2006504a2724f48712feacd1e94c143c4013887ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.081964 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45cf8c338a42eaff90f551f49c7b98fe724ec8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.102566 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.116794 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925b275b3ce3428bc68dd39075a8e86e83b35703ea89a529c6ce6bbf53fa215a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.130054 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.151915 4684 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.151972 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.151984 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.152002 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.152015 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:31Z","lastTransitionTime":"2026-01-21T10:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.155829 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a86627cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.166434 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e8c501924c32a0dc19e2006504a2724f48712feacd1e94c143c4013887ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.180420 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.195120 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.207517 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.219031 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.237190 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://045cc447816583a6e70ffc588d1b4fdc0b682635f39ef089686211498f596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.251410 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.254221 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.254251 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.254262 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.254280 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.254291 4684 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:31Z","lastTransitionTime":"2026-01-21T10:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.264851 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.280031 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.292672 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.387543 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.387600 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.387615 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.387638 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.387653 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:31Z","lastTransitionTime":"2026-01-21T10:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.482377 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 15:53:33.98590929 +0000 UTC Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.490394 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.490435 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.490445 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.490461 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.490471 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:31Z","lastTransitionTime":"2026-01-21T10:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.513777 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.513817 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.513888 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:06:31 crc kubenswrapper[4684]: E0121 10:06:31.514228 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:06:31 crc kubenswrapper[4684]: E0121 10:06:31.514304 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:06:31 crc kubenswrapper[4684]: E0121 10:06:31.514203 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.593051 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.593105 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.593117 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.593137 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.593150 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:31Z","lastTransitionTime":"2026-01-21T10:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.695996 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.696032 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.696042 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.696059 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.696070 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:31Z","lastTransitionTime":"2026-01-21T10:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.778016 4684 generic.go:334] "Generic (PLEG): container finished" podID="c2704b9e-474a-466a-b78c-d136a2f95a3b" containerID="51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c" exitCode=0 Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.778116 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" event={"ID":"c2704b9e-474a-466a-b78c-d136a2f95a3b","Type":"ContainerDied","Data":"51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c"} Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.778621 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.779233 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.798320 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.798384 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.798399 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.798414 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.798425 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:31Z","lastTransitionTime":"2026-01-21T10:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.800724 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a86627cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.803313 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.812091 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e8c501924c32a0dc19e2006504a2724f48712feacd1e94c143c4013887ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.823834 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.839233 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.856323 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.872066 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.886452 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.901182 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.901225 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.901236 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.901823 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.901920 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:31Z","lastTransitionTime":"2026-01-21T10:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.902951 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.914666 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.934661 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-sock
et\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://045cc447816583a6e70ffc588d1b4fdc0b682635f39ef089686211498f596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.950981 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.964410 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.988163 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925b275b3ce3428bc68dd39075a8e86e83b35703ea89a529c6ce6bbf53fa215a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:31 crc kubenswrapper[4684]: I0121 10:06:31.997723 4684 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.001243 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]
,\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.006819 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.006874 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.006887 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.006905 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.006916 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:32Z","lastTransitionTime":"2026-01-21T10:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.016491 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45cf8c338a42eaff90f551f49c7b98fe724ec8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.034545 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a866
27cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.046643 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e8c501924c32a0dc19e2006504a2724f48712feacd1e94c143c4013887ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.062709 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mount
Path\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.076803 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.092941 4684 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fd
dfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPa
th\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.107141 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.111506 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.111534 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.111543 4684 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.111559 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.111569 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:32Z","lastTransitionTime":"2026-01-21T10:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.122500 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.136178 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.148587 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.169395 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://045cc447816583a6e70ffc588d1b4fdc0b682635f39ef089686211498f596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath
\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.201472 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.213521 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.213567 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.213581 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.213602 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.213616 4684 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:32Z","lastTransitionTime":"2026-01-21T10:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.225703 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.240104 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925b275b3ce3428bc68dd39075a8e86e83b35703ea89a529c6ce6bbf53fa215a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.250886 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.263833 4684 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45cf8c338a42e
aff90f551f49c7b98fe724ec8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.318570 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.318625 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.318637 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.318655 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.318667 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:32Z","lastTransitionTime":"2026-01-21T10:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.420786 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.420827 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.420838 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.420855 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.420864 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:32Z","lastTransitionTime":"2026-01-21T10:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.483599 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 09:26:46.96182336 +0000 UTC Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.523609 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.523978 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.524065 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.524151 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.524224 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:32Z","lastTransitionTime":"2026-01-21T10:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.540456 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a86627cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.551138 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e8c501924c32a0dc19e2006504a2724f48712feacd1e94c143c4013887ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.569461 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.587236 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.604877 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.626829 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.626887 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.626900 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.626919 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.626936 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:32Z","lastTransitionTime":"2026-01-21T10:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.626909 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://045cc447816583a6e70ffc588d1b4fdc0b682635
f39ef089686211498f596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.646288 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.665119 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.682903 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.696678 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.712175 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.726977 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-
kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45cf8c338a42eaff90f551f49c7b98fe724ec8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.729598 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.729626 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.729635 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.729651 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.729661 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:32Z","lastTransitionTime":"2026-01-21T10:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.742961 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.762087 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925b275b3ce3428bc68dd39075a8e86e83b35703ea89a529c6ce6bbf53fa215a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.775596 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.788411 4684 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" event={"ID":"c2704b9e-474a-466a-b78c-d136a2f95a3b","Type":"ContainerStarted","Data":"35c90d2149e2b9ca12f6ef71ae05e440a733043e4c993e48adc0651a2dc7dd56"} Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.805422 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45cf8c338a42eaff90f551f49c7b98fe724ec8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.818826 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.834543 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.835031 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.835195 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.835300 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.835404 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:32Z","lastTransitionTime":"2026-01-21T10:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.836157 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925b275b3ce3428bc68dd39075a8e86e83b35703ea89a529c6ce6bbf53fa215a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.852246 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.877002 4684 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a86627cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.894979 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e8c501924c32a0dc19e2006504a2724f48712feacd1e94c143c4013887ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.910620 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.928965 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c90d2149e2b9ca12f6ef71ae05e440a733043e4c993e48adc0651a2dc7dd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.938316 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.938375 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:32 crc 
kubenswrapper[4684]: I0121 10:06:32.938390 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.938408 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.938420 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:32Z","lastTransitionTime":"2026-01-21T10:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.945656 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"n
ame\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.967871 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.982499 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:32 crc kubenswrapper[4684]: I0121 10:06:32.996642 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.014401 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.027467 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.041068 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.041115 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.041127 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.041150 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.041166 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:33Z","lastTransitionTime":"2026-01-21T10:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.054254 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://045cc447816583a6e70ffc588d1b4fdc0b682635f39ef089686211498f596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.144459 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.144505 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.144517 
4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.144536 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.144548 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:33Z","lastTransitionTime":"2026-01-21T10:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.246918 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.246954 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.246966 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.246983 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.246995 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:33Z","lastTransitionTime":"2026-01-21T10:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.349282 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.349324 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.349336 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.349354 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.349380 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:33Z","lastTransitionTime":"2026-01-21T10:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.451269 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.451306 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.451316 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.451333 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.451344 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:33Z","lastTransitionTime":"2026-01-21T10:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.484197 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 18:34:18.696034342 +0000 UTC Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.513840 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.513893 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.513983 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:06:33 crc kubenswrapper[4684]: E0121 10:06:33.514002 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:06:33 crc kubenswrapper[4684]: E0121 10:06:33.514122 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:06:33 crc kubenswrapper[4684]: E0121 10:06:33.514249 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.554321 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.554354 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.554381 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.554397 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.554408 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:33Z","lastTransitionTime":"2026-01-21T10:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.657011 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.657048 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.657063 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.657083 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.657097 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:33Z","lastTransitionTime":"2026-01-21T10:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.759547 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.759894 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.759993 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.760139 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.760237 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:33Z","lastTransitionTime":"2026-01-21T10:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.793584 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6vjwl_8dac888b-051f-405a-8c23-60c205d2aecc/ovnkube-controller/0.log" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.796828 4684 generic.go:334] "Generic (PLEG): container finished" podID="8dac888b-051f-405a-8c23-60c205d2aecc" containerID="045cc447816583a6e70ffc588d1b4fdc0b682635f39ef089686211498f596045" exitCode=1 Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.796885 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" event={"ID":"8dac888b-051f-405a-8c23-60c205d2aecc","Type":"ContainerDied","Data":"045cc447816583a6e70ffc588d1b4fdc0b682635f39ef089686211498f596045"} Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.798204 4684 scope.go:117] "RemoveContainer" containerID="045cc447816583a6e70ffc588d1b4fdc0b682635f39ef089686211498f596045" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.821299 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c90d2149e2b9ca12f6ef71ae05e440a733043e4c993e48adc0651a2dc7dd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.840319 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.855899 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.862941 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.862981 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.862993 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.863015 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.863029 4684 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:33Z","lastTransitionTime":"2026-01-21T10:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.869955 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.881909 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.894150 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.903600 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.924170 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://045cc447816583a6e70ffc588d1b4fdc0b682635
f39ef089686211498f596045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://045cc447816583a6e70ffc588d1b4fdc0b682635f39ef089686211498f596045\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:06:33Z\\\",\\\"message\\\":\\\"rs/externalversions/factory.go:140\\\\nI0121 10:06:33.326844 5930 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 10:06:33.326859 5930 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 10:06:33.326881 5930 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 10:06:33.326893 5930 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 10:06:33.326898 5930 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 10:06:33.326917 5930 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 10:06:33.326934 5930 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:06:33.326979 5930 factory.go:656] Stopping watch factory\\\\nI0121 10:06:33.327004 5930 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 10:06:33.327012 5930 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 10:06:33.327018 5930 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 10:06:33.327024 5930 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 10:06:33.327030 5930 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 10:06:33.327037 5930 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.940376 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.957113 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.968161 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.968217 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.968232 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.968257 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.968271 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:33Z","lastTransitionTime":"2026-01-21T10:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.970563 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925b275b3ce3428bc68dd39075a8e86e83b35703ea89a529c6ce6bbf53fa215a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:33 crc kubenswrapper[4684]: I0121 10:06:33.986486 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.004210 4684 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45cf8c338a42e
aff90f551f49c7b98fe724ec8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.017148 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e8c501924c32a0dc19e2006504a2724f48712feacd1e94c143c4013887ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.046415 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a86627cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1
d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.071741 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.072042 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.072108 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.072192 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.072309 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:34Z","lastTransitionTime":"2026-01-21T10:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.175722 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.175778 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.175796 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.175823 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.175877 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:34Z","lastTransitionTime":"2026-01-21T10:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.278288 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.278355 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.278412 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.278437 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.278454 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:34Z","lastTransitionTime":"2026-01-21T10:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.380739 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.381026 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.381134 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.381212 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.381277 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:34Z","lastTransitionTime":"2026-01-21T10:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.484379 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 14:44:30.848294753 +0000 UTC Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.485556 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.485662 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.485756 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.485825 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.485882 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:34Z","lastTransitionTime":"2026-01-21T10:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.588470 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.588509 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.588520 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.588537 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.588549 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:34Z","lastTransitionTime":"2026-01-21T10:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.691400 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.691674 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.691755 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.691835 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.691936 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:34Z","lastTransitionTime":"2026-01-21T10:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.794942 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.795338 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.795352 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.795391 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.795404 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:34Z","lastTransitionTime":"2026-01-21T10:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.807274 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6vjwl_8dac888b-051f-405a-8c23-60c205d2aecc/ovnkube-controller/0.log" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.810671 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" event={"ID":"8dac888b-051f-405a-8c23-60c205d2aecc","Type":"ContainerStarted","Data":"1e372d443f125d5783bc805f64bf97f915eef0adbf471c4272affdb31bbe1e54"} Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.811499 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.824631 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e8c501924c32a0dc19e2006504a2724f48712feacd1e94c143c4013887ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 
10:06:34.847470 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a86627cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\
\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.867450 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c90d2149e2b9ca12f6ef71ae05e440a733043e4c993e48adc0651a2dc7dd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.885265 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.897811 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.897890 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.897916 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.897949 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.897973 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:34Z","lastTransitionTime":"2026-01-21T10:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.906525 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.923998 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:34Z is after 
2025-08-24T17:21:41Z" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.938186 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.953912 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.969040 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:34 crc kubenswrapper[4684]: I0121 10:06:34.988968 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e372d443f125d5783bc805f64bf97f915eef0ad
bf471c4272affdb31bbe1e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://045cc447816583a6e70ffc588d1b4fdc0b682635f39ef089686211498f596045\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:06:33Z\\\",\\\"message\\\":\\\"rs/externalversions/factory.go:140\\\\nI0121 10:06:33.326844 5930 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 10:06:33.326859 5930 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 10:06:33.326881 5930 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 10:06:33.326893 5930 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 10:06:33.326898 5930 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 10:06:33.326917 5930 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 10:06:33.326934 5930 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:06:33.326979 5930 factory.go:656] Stopping watch factory\\\\nI0121 10:06:33.327004 5930 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 10:06:33.327012 5930 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 10:06:33.327018 5930 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 10:06:33.327024 5930 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 10:06:33.327030 5930 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 10:06:33.327037 5930 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.001724 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.001769 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.001780 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.001800 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.001810 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:35Z","lastTransitionTime":"2026-01-21T10:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.004779 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.017985 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.030053 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925b275b3ce3428bc68dd39075a8e86e83b35703ea89a529c6ce6bbf53fa215a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.044574 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.057839 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-control
ler-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45cf8c338a42eaff90f551f49c7b98fe724ec8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.104552 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.104625 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.104650 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.104686 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.104708 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:35Z","lastTransitionTime":"2026-01-21T10:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.207658 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.207705 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.207717 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.207734 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.207745 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:35Z","lastTransitionTime":"2026-01-21T10:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.310735 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.310775 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.310789 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.310811 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.310824 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:35Z","lastTransitionTime":"2026-01-21T10:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.413285 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.413326 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.413338 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.413374 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.413386 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:35Z","lastTransitionTime":"2026-01-21T10:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.485438 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 23:59:43.984156227 +0000 UTC Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.513544 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.514013 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.513937 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:06:35 crc kubenswrapper[4684]: E0121 10:06:35.514255 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:06:35 crc kubenswrapper[4684]: E0121 10:06:35.514478 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:06:35 crc kubenswrapper[4684]: E0121 10:06:35.514582 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.515734 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.515761 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.515771 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.515788 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.515798 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:35Z","lastTransitionTime":"2026-01-21T10:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.547793 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.547838 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.547850 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.547868 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.547879 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:35Z","lastTransitionTime":"2026-01-21T10:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:35 crc kubenswrapper[4684]: E0121 10:06:35.559295 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74077632-3431-446e-a506-0b618e843835\\\",\\\"systemUUID\\\":\\\"667581df-3edd-489b-b118-69df188c96a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.563683 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.563881 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.563970 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.564134 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.564261 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:35Z","lastTransitionTime":"2026-01-21T10:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:35 crc kubenswrapper[4684]: E0121 10:06:35.576063 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74077632-3431-446e-a506-0b618e843835\\\",\\\"systemUUID\\\":\\\"667581df-3edd-489b-b118-69df188c96a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.580449 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.580498 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.580509 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.580530 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.580542 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:35Z","lastTransitionTime":"2026-01-21T10:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:35 crc kubenswrapper[4684]: E0121 10:06:35.592579 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74077632-3431-446e-a506-0b618e843835\\\",\\\"systemUUID\\\":\\\"667581df-3edd-489b-b118-69df188c96a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.597284 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.597323 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.597333 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.597351 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.597375 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:35Z","lastTransitionTime":"2026-01-21T10:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:35 crc kubenswrapper[4684]: E0121 10:06:35.608844 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74077632-3431-446e-a506-0b618e843835\\\",\\\"systemUUID\\\":\\\"667581df-3edd-489b-b118-69df188c96a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.611887 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.611937 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.611952 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.611974 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.611986 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:35Z","lastTransitionTime":"2026-01-21T10:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:35 crc kubenswrapper[4684]: E0121 10:06:35.622861 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74077632-3431-446e-a506-0b618e843835\\\",\\\"systemUUID\\\":\\\"667581df-3edd-489b-b118-69df188c96a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:35 crc kubenswrapper[4684]: E0121 10:06:35.623003 4684 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.624738 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.624887 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.624902 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.624918 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.624930 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:35Z","lastTransitionTime":"2026-01-21T10:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.727561 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.727804 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.727863 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.727950 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.728020 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:35Z","lastTransitionTime":"2026-01-21T10:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.816058 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6vjwl_8dac888b-051f-405a-8c23-60c205d2aecc/ovnkube-controller/1.log" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.816822 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6vjwl_8dac888b-051f-405a-8c23-60c205d2aecc/ovnkube-controller/0.log" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.820485 4684 generic.go:334] "Generic (PLEG): container finished" podID="8dac888b-051f-405a-8c23-60c205d2aecc" containerID="1e372d443f125d5783bc805f64bf97f915eef0adbf471c4272affdb31bbe1e54" exitCode=1 Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.820547 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" event={"ID":"8dac888b-051f-405a-8c23-60c205d2aecc","Type":"ContainerDied","Data":"1e372d443f125d5783bc805f64bf97f915eef0adbf471c4272affdb31bbe1e54"} Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.820626 4684 scope.go:117] "RemoveContainer" containerID="045cc447816583a6e70ffc588d1b4fdc0b682635f39ef089686211498f596045" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.821776 4684 scope.go:117] "RemoveContainer" containerID="1e372d443f125d5783bc805f64bf97f915eef0adbf471c4272affdb31bbe1e54" Jan 21 10:06:35 crc kubenswrapper[4684]: E0121 10:06:35.822067 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-6vjwl_openshift-ovn-kubernetes(8dac888b-051f-405a-8c23-60c205d2aecc)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.830257 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.830318 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.830337 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.830389 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.830412 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:35Z","lastTransitionTime":"2026-01-21T10:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.854465 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a86627cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.871738 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e8c501924c32a0dc19e2006504a2724f48712feacd1e94c143c4013887ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.872622 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt"] Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.873151 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.874961 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.876170 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.905766 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.934429 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.934479 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.934493 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.934514 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 
10:06:35.934527 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:35Z","lastTransitionTime":"2026-01-21T10:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.948010 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c90d2149e2b9ca12f6ef71ae05e440a733043e4c993e48adc0651a2dc7dd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mou
ntPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.969745 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:35 crc kubenswrapper[4684]: I0121 10:06:35.989766 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.007118 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/feda3c91-51cd-4b8e-becd-177b669f0ee1-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xzdqt\" (UID: \"feda3c91-51cd-4b8e-becd-177b669f0ee1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.007189 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/feda3c91-51cd-4b8e-becd-177b669f0ee1-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xzdqt\" (UID: \"feda3c91-51cd-4b8e-becd-177b669f0ee1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.007210 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/feda3c91-51cd-4b8e-becd-177b669f0ee1-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xzdqt\" (UID: \"feda3c91-51cd-4b8e-becd-177b669f0ee1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.007434 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v75dd\" (UniqueName: \"kubernetes.io/projected/feda3c91-51cd-4b8e-becd-177b669f0ee1-kube-api-access-v75dd\") pod \"ovnkube-control-plane-749d76644c-xzdqt\" (UID: \"feda3c91-51cd-4b8e-becd-177b669f0ee1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.007864 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.024925 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.037111 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.037167 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.037196 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.037221 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.037235 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:36Z","lastTransitionTime":"2026-01-21T10:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.038920 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.051288 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.072198 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e372d443f125d5783bc805f64bf97f915eef0adbf471c4272affdb31bbe1e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://045cc447816583a6e70ffc588d1b4fdc0b682635f39ef089686211498f596045\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:06:33Z\\\",\\\"message\\\":\\\"rs/externalversions/factory.go:140\\\\nI0121 10:06:33.326844 5930 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 10:06:33.326859 5930 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 10:06:33.326881 5930 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 10:06:33.326893 5930 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 10:06:33.326898 5930 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 10:06:33.326917 5930 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 10:06:33.326934 5930 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:06:33.326979 5930 factory.go:656] Stopping watch factory\\\\nI0121 10:06:33.327004 5930 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 10:06:33.327012 5930 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 10:06:33.327018 5930 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 10:06:33.327024 5930 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 10:06:33.327030 5930 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 10:06:33.327037 5930 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e372d443f125d5783bc805f64bf97f915eef0adbf471c4272affdb31bbe1e54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"message\\\":\\\".963715 6101 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:06:34.963746 6101 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:06:34.963771 6101 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:06:34.963799 6101 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:06:34.963830 6101 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 10:06:34.966372 6101 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 10:06:34.966394 6101 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 10:06:34.966409 6101 factory.go:656] Stopping watch factory\\\\nI0121 10:06:34.966423 6101 ovnkube.go:599] Stopped ovnkube\\\\nI0121 10:06:34.966444 6101 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.085796 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45cf8c338a42eaff90f551f49c7b98fe724ec8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.099630 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.108986 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/feda3c91-51cd-4b8e-becd-177b669f0ee1-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xzdqt\" (UID: \"feda3c91-51cd-4b8e-becd-177b669f0ee1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.109023 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/feda3c91-51cd-4b8e-becd-177b669f0ee1-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xzdqt\" (UID: \"feda3c91-51cd-4b8e-becd-177b669f0ee1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.109062 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v75dd\" (UniqueName: \"kubernetes.io/projected/feda3c91-51cd-4b8e-becd-177b669f0ee1-kube-api-access-v75dd\") pod \"ovnkube-control-plane-749d76644c-xzdqt\" (UID: \"feda3c91-51cd-4b8e-becd-177b669f0ee1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.109103 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/feda3c91-51cd-4b8e-becd-177b669f0ee1-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xzdqt\" (UID: \"feda3c91-51cd-4b8e-becd-177b669f0ee1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.109773 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/feda3c91-51cd-4b8e-becd-177b669f0ee1-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xzdqt\" (UID: \"feda3c91-51cd-4b8e-becd-177b669f0ee1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.109898 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/feda3c91-51cd-4b8e-becd-177b669f0ee1-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xzdqt\" (UID: \"feda3c91-51cd-4b8e-becd-177b669f0ee1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 
10:06:36.112320 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925b275b3ce3428bc68dd39075a8e86e83b35703ea89a529c6ce6bbf53fa215a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.116633 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/feda3c91-51cd-4b8e-becd-177b669f0ee1-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xzdqt\" (UID: \"feda3c91-51cd-4b8e-becd-177b669f0ee1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.128113 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v75dd\" (UniqueName: \"kubernetes.io/projected/feda3c91-51cd-4b8e-becd-177b669f0ee1-kube-api-access-v75dd\") pod \"ovnkube-control-plane-749d76644c-xzdqt\" (UID: \"feda3c91-51cd-4b8e-becd-177b669f0ee1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.129426 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.139494 4684 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.139545 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.139557 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.139576 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.139588 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:36Z","lastTransitionTime":"2026-01-21T10:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.160507 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a86627cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.176015 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e8c501924c32a0dc19e2006504a2724f48712feacd1e94c143c4013887ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.194803 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.209190 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.215249 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c90d2149e2b9ca12f6ef71ae05e440a733043e4c993e48adc0651a2dc7dd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb93ac
c9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: W0121 10:06:36.224126 4684 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfeda3c91_51cd_4b8e_becd_177b669f0ee1.slice/crio-7e80d6e5fdc2446987c88c337ba3b44742dae4d85f6015680446c01cda955483 WatchSource:0}: Error finding container 7e80d6e5fdc2446987c88c337ba3b44742dae4d85f6015680446c01cda955483: Status 404 returned error can't find the container with id 7e80d6e5fdc2446987c88c337ba3b44742dae4d85f6015680446c01cda955483 Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.232140 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-
multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.242114 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.242146 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.242158 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.242176 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.242186 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:36Z","lastTransitionTime":"2026-01-21T10:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.258459 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e372d443f125d5783bc805f64bf97f915eef0adbf471c4272affdb31bbe1e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://045cc447816583a6e70ffc588d1b4fdc0b682635f39ef089686211498f596045\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:06:33Z\\\",\\\"message\\\":\\\"rs/externalversions/factory.go:140\\\\nI0121 10:06:33.326844 5930 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 10:06:33.326859 5930 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 10:06:33.326881 5930 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 10:06:33.326893 5930 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 10:06:33.326898 5930 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 10:06:33.326917 5930 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 10:06:33.326934 5930 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:06:33.326979 5930 factory.go:656] Stopping watch factory\\\\nI0121 10:06:33.327004 5930 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 10:06:33.327012 5930 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 10:06:33.327018 5930 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 10:06:33.327024 5930 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 10:06:33.327030 5930 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 10:06:33.327037 5930 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e372d443f125d5783bc805f64bf97f915eef0adbf471c4272affdb31bbe1e54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"message\\\":\\\".963715 6101 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:06:34.963746 6101 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:06:34.963771 6101 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:06:34.963799 6101 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:06:34.963830 6101 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 10:06:34.966372 6101 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 10:06:34.966394 6101 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 10:06:34.966409 6101 factory.go:656] Stopping watch factory\\\\nI0121 10:06:34.966423 6101 ovnkube.go:599] Stopped ovnkube\\\\nI0121 10:06:34.966444 6101 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef
0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.275047 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.291717 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.310711 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.326486 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.341984 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.344978 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.345018 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.345031 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.345052 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.345065 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:36Z","lastTransitionTime":"2026-01-21T10:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.357632 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45cf8c338a42eaff90f551f49c7b98fe724ec8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.376398 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.392266 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925b275b3ce3428bc68dd39075a8e86e83b35703ea89a529c6ce6bbf53fa215a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.407242 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.422922 4684 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feda3c91-51cd-4b8e-becd-177b669f0ee1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xzdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.447052 4684 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.447098 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.447112 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.447132 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.447146 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:36Z","lastTransitionTime":"2026-01-21T10:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.485886 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 22:19:51.322822701 +0000 UTC Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.550010 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.550239 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.550299 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.550377 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.550436 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:36Z","lastTransitionTime":"2026-01-21T10:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.629983 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-7wzh7"] Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.630552 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:06:36 crc kubenswrapper[4684]: E0121 10:06:36.630624 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.653307 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.653568 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.653665 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.653766 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.653829 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:36Z","lastTransitionTime":"2026-01-21T10:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.662852 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a86627cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.673402 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e8c501924c32a0dc19e2006504a2724f48712feacd1e94c143c4013887ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z"
Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.685540 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z"
Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.699424 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c90d2149e2b9ca12f6ef71ae05e440a733043e4c993e48adc0651a2dc7dd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z"
Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.709545 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z"
Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.714498 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58x9s\" (UniqueName: \"kubernetes.io/projected/49971ee3-e56a-4d50-8fc5-231bdcfc92d5-kube-api-access-58x9s\") pod \"network-metrics-daemon-7wzh7\" (UID: \"49971ee3-e56a-4d50-8fc5-231bdcfc92d5\") " pod="openshift-multus/network-metrics-daemon-7wzh7"
Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.714652 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49971ee3-e56a-4d50-8fc5-231bdcfc92d5-metrics-certs\") pod \"network-metrics-daemon-7wzh7\" (UID: \"49971ee3-e56a-4d50-8fc5-231bdcfc92d5\") " pod="openshift-multus/network-metrics-daemon-7wzh7"
Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.720466 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z"
Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.730935 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z"
Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.741106 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z"
Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.756779 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.756816 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.756826 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.756841 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.756850 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:36Z","lastTransitionTime":"2026-01-21T10:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.759168 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e372d443f125d5783bc805f64bf97f915eef0adbf471c4272affdb31bbe1e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://045cc447816583a6e70ffc588d1b4fdc0b682635f39ef089686211498f596045\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:06:33Z\\\",\\\"message\\\":\\\"rs/externalversions/factory.go:140\\\\nI0121 10:06:33.326844 5930 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 10:06:33.326859 5930 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 10:06:33.326881 5930 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 10:06:33.326893 5930 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 10:06:33.326898 5930 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0121 10:06:33.326917 5930 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 10:06:33.326934 5930 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:06:33.326979 5930 factory.go:656] Stopping watch factory\\\\nI0121 10:06:33.327004 5930 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 10:06:33.327012 5930 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 10:06:33.327018 5930 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 10:06:33.327024 5930 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 10:06:33.327030 5930 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 10:06:33.327037 5930 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e372d443f125d5783bc805f64bf97f915eef0adbf471c4272affdb31bbe1e54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"message\\\":\\\".963715 6101 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:06:34.963746 6101 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:06:34.963771 6101 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:06:34.963799 6101 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:06:34.963830 6101 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 10:06:34.966372 6101 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 10:06:34.966394 6101 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 10:06:34.966409 6101 factory.go:656] Stopping watch factory\\\\nI0121 10:06:34.966423 6101 ovnkube.go:599] Stopped ovnkube\\\\nI0121 10:06:34.966444 6101 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z"
Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.774982 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z"
Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.789398 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z"
Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.802161 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925b275b3ce3428bc68dd39075a8e86e83b35703ea89a529c6ce6bbf53fa215a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z"
Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.813470 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z"
Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.816107 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58x9s\" (UniqueName: \"kubernetes.io/projected/49971ee3-e56a-4d50-8fc5-231bdcfc92d5-kube-api-access-58x9s\") pod \"network-metrics-daemon-7wzh7\" (UID: \"49971ee3-e56a-4d50-8fc5-231bdcfc92d5\") " pod="openshift-multus/network-metrics-daemon-7wzh7"
Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.816195 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49971ee3-e56a-4d50-8fc5-231bdcfc92d5-metrics-certs\") pod \"network-metrics-daemon-7wzh7\" (UID: \"49971ee3-e56a-4d50-8fc5-231bdcfc92d5\") " pod="openshift-multus/network-metrics-daemon-7wzh7"
Jan 21 10:06:36 crc kubenswrapper[4684]: E0121 10:06:36.816293 4684 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 10:06:36 crc kubenswrapper[4684]: E0121 10:06:36.816356 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49971ee3-e56a-4d50-8fc5-231bdcfc92d5-metrics-certs podName:49971ee3-e56a-4d50-8fc5-231bdcfc92d5 nodeName:}" failed. No retries permitted until 2026-01-21 10:06:37.316336192 +0000 UTC m=+35.074419159 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/49971ee3-e56a-4d50-8fc5-231bdcfc92d5-metrics-certs") pod "network-metrics-daemon-7wzh7" (UID: "49971ee3-e56a-4d50-8fc5-231bdcfc92d5") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.825091 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6vjwl_8dac888b-051f-405a-8c23-60c205d2aecc/ovnkube-controller/1.log"
Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.827824 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feda3c91-51cd-4b8e-becd-177b669f0ee1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xzdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z"
Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.828493 4684 scope.go:117] "RemoveContainer" containerID="1e372d443f125d5783bc805f64bf97f915eef0adbf471c4272affdb31bbe1e54"
Jan 21 10:06:36 crc kubenswrapper[4684]: E0121 10:06:36.828694 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-6vjwl_openshift-ovn-kubernetes(8dac888b-051f-405a-8c23-60c205d2aecc)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" podUID="8dac888b-051f-405a-8c23-60c205d2aecc"
Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.829878 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" event={"ID":"feda3c91-51cd-4b8e-becd-177b669f0ee1","Type":"ContainerStarted","Data":"8cfbfdabb57b220a10323f2265b70a9583abee7feee55914699d619d556f9db7"}
Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.829925 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" 
event={"ID":"feda3c91-51cd-4b8e-becd-177b669f0ee1","Type":"ContainerStarted","Data":"5cc4a441c86c981477aad2074fb6af273a45679d04b8f017405e4259fa3af0a7"} Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.829938 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" event={"ID":"feda3c91-51cd-4b8e-becd-177b669f0ee1","Type":"ContainerStarted","Data":"7e80d6e5fdc2446987c88c337ba3b44742dae4d85f6015680446c01cda955483"} Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.838103 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58x9s\" (UniqueName: \"kubernetes.io/projected/49971ee3-e56a-4d50-8fc5-231bdcfc92d5-kube-api-access-58x9s\") pod \"network-metrics-daemon-7wzh7\" (UID: \"49971ee3-e56a-4d50-8fc5-231bdcfc92d5\") " pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.839749 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7wzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49971ee3-e56a-4d50-8fc5-231bdcfc92d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58x9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58x9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7wzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.855453 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45cf8c338a42eaff90f551f49c7b98fe724ec8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.858834 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.858879 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.858892 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.858912 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.858923 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:36Z","lastTransitionTime":"2026-01-21T10:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.870020 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.888653 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e372d443f125d5783bc805f64bf97f915eef0adbf471c4272affdb31bbe1e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e372d443f125d5783bc805f64bf97f915eef0adbf471c4272affdb31bbe1e54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"message\\\":\\\".963715 6101 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:06:34.963746 6101 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:06:34.963771 6101 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:06:34.963799 6101 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:06:34.963830 6101 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 10:06:34.966372 6101 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 10:06:34.966394 6101 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 10:06:34.966409 6101 factory.go:656] Stopping watch factory\\\\nI0121 10:06:34.966423 6101 ovnkube.go:599] Stopped ovnkube\\\\nI0121 10:06:34.966444 6101 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6vjwl_openshift-ovn-kubernetes(8dac888b-051f-405a-8c23-60c205d2aecc)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.901433 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.914587 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.927711 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.945994 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.956968 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.961250 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.961295 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.961308 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.961328 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.961340 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:36Z","lastTransitionTime":"2026-01-21T10:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.968518 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7wzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49971ee3-e56a-4d50-8fc5-231bdcfc92d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58x9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58x9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7wzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.981939 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45cf8c338a42eaff90f551f49c7b98fe724ec8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:36 crc kubenswrapper[4684]: I0121 10:06:36.995206 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.009532 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925b275b3ce3428bc68dd39075a8e86e83b35703ea89a529c6ce6bbf53fa215a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.026555 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.041532 4684 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feda3c91-51cd-4b8e-becd-177b669f0ee1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc4a441c86c981477aad2074fb6af273a45679d04b8f017405e4259fa3af0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbfdabb57b220a10323f2265b70a9583abee7feee55914699d619d556f9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xzdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.063303 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a86627cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30
a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.064468 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.064514 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.064532 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.064552 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.064565 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:37Z","lastTransitionTime":"2026-01-21T10:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.077066 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e8c501924c32a0dc19e2006504a2724f48712feacd1e94c143c4013887ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.092174 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.109388 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c90d2149e2b9ca12f6ef71ae05e440a733043e4c993e48adc0651a2dc7dd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.123009 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.166921 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.166956 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.166969 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.166985 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.166995 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:37Z","lastTransitionTime":"2026-01-21T10:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.269676 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.269733 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.269750 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.269776 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.269796 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:37Z","lastTransitionTime":"2026-01-21T10:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.321111 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49971ee3-e56a-4d50-8fc5-231bdcfc92d5-metrics-certs\") pod \"network-metrics-daemon-7wzh7\" (UID: \"49971ee3-e56a-4d50-8fc5-231bdcfc92d5\") " pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:06:37 crc kubenswrapper[4684]: E0121 10:06:37.321573 4684 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:06:37 crc kubenswrapper[4684]: E0121 10:06:37.321785 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49971ee3-e56a-4d50-8fc5-231bdcfc92d5-metrics-certs podName:49971ee3-e56a-4d50-8fc5-231bdcfc92d5 nodeName:}" failed. 
No retries permitted until 2026-01-21 10:06:38.321761432 +0000 UTC m=+36.079844399 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/49971ee3-e56a-4d50-8fc5-231bdcfc92d5-metrics-certs") pod "network-metrics-daemon-7wzh7" (UID: "49971ee3-e56a-4d50-8fc5-231bdcfc92d5") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.372406 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.372754 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.372895 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.373020 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.373134 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:37Z","lastTransitionTime":"2026-01-21T10:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.476837 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.477165 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.477280 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.477428 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.477552 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:37Z","lastTransitionTime":"2026-01-21T10:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.486286 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 11:07:26.760073435 +0000 UTC Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.514043 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.514048 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:06:37 crc kubenswrapper[4684]: E0121 10:06:37.514207 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:06:37 crc kubenswrapper[4684]: E0121 10:06:37.514438 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.514618 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:06:37 crc kubenswrapper[4684]: E0121 10:06:37.514825 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.580763 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.581017 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.581107 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.581186 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.581245 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:37Z","lastTransitionTime":"2026-01-21T10:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.624586 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:06:37 crc kubenswrapper[4684]: E0121 10:06:37.624799 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:06:53.624759312 +0000 UTC m=+51.382842319 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.625038 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.625128 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.625259 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.625337 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:06:37 crc kubenswrapper[4684]: E0121 10:06:37.625292 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:06:37 crc kubenswrapper[4684]: E0121 10:06:37.625573 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:06:37 crc kubenswrapper[4684]: E0121 
10:06:37.625601 4684 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:06:37 crc kubenswrapper[4684]: E0121 10:06:37.625681 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 10:06:53.62565841 +0000 UTC m=+51.383741417 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:06:37 crc kubenswrapper[4684]: E0121 10:06:37.625413 4684 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:06:37 crc kubenswrapper[4684]: E0121 10:06:37.625789 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:06:37 crc kubenswrapper[4684]: E0121 10:06:37.625867 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:06:37 crc kubenswrapper[4684]: E0121 10:06:37.625927 4684 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:06:37 crc kubenswrapper[4684]: E0121 10:06:37.625799 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:06:53.625770933 +0000 UTC m=+51.383854040 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:06:37 crc kubenswrapper[4684]: E0121 10:06:37.625488 4684 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:06:37 crc kubenswrapper[4684]: E0121 10:06:37.626130 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-21 10:06:53.626060652 +0000 UTC m=+51.384143649 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:06:37 crc kubenswrapper[4684]: E0121 10:06:37.626200 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:06:53.626150815 +0000 UTC m=+51.384234022 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.684171 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.684217 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.684229 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.684247 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.684258 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:37Z","lastTransitionTime":"2026-01-21T10:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.786714 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.786747 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.786758 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.786774 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.786784 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:37Z","lastTransitionTime":"2026-01-21T10:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.890116 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.890487 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.890700 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.890845 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.891042 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:37Z","lastTransitionTime":"2026-01-21T10:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.995140 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.995480 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.995626 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.995940 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:37 crc kubenswrapper[4684]: I0121 10:06:37.996221 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:37Z","lastTransitionTime":"2026-01-21T10:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.099599 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.099982 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.100128 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.100272 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.100490 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:38Z","lastTransitionTime":"2026-01-21T10:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.203610 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.203651 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.203662 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.203679 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.203693 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:38Z","lastTransitionTime":"2026-01-21T10:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.306692 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.306749 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.306763 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.306784 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.306804 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:38Z","lastTransitionTime":"2026-01-21T10:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.334490 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49971ee3-e56a-4d50-8fc5-231bdcfc92d5-metrics-certs\") pod \"network-metrics-daemon-7wzh7\" (UID: \"49971ee3-e56a-4d50-8fc5-231bdcfc92d5\") " pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:06:38 crc kubenswrapper[4684]: E0121 10:06:38.334690 4684 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:06:38 crc kubenswrapper[4684]: E0121 10:06:38.334804 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49971ee3-e56a-4d50-8fc5-231bdcfc92d5-metrics-certs podName:49971ee3-e56a-4d50-8fc5-231bdcfc92d5 nodeName:}" failed. No retries permitted until 2026-01-21 10:06:40.334777389 +0000 UTC m=+38.092860516 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/49971ee3-e56a-4d50-8fc5-231bdcfc92d5-metrics-certs") pod "network-metrics-daemon-7wzh7" (UID: "49971ee3-e56a-4d50-8fc5-231bdcfc92d5") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.410235 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.410841 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.410865 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.410888 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.410905 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:38Z","lastTransitionTime":"2026-01-21T10:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.486652 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 02:16:45.107404063 +0000 UTC Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.513703 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:06:38 crc kubenswrapper[4684]: E0121 10:06:38.513856 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.513991 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.514025 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.514034 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.514051 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.514062 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:38Z","lastTransitionTime":"2026-01-21T10:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.617986 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.618038 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.618057 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.618078 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.618092 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:38Z","lastTransitionTime":"2026-01-21T10:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.841132 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.841247 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.841275 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.841310 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.841336 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:38Z","lastTransitionTime":"2026-01-21T10:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.944479 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.944554 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.944573 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.944601 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:38 crc kubenswrapper[4684]: I0121 10:06:38.944619 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:38Z","lastTransitionTime":"2026-01-21T10:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.054803 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.054877 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.054897 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.054927 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.054950 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:39Z","lastTransitionTime":"2026-01-21T10:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.157594 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.157673 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.157693 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.157718 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.157736 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:39Z","lastTransitionTime":"2026-01-21T10:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.261553 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.261624 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.261647 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.261679 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.261702 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:39Z","lastTransitionTime":"2026-01-21T10:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.365814 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.366126 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.366253 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.366410 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.366530 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:39Z","lastTransitionTime":"2026-01-21T10:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.469325 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.469431 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.469448 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.469526 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.469545 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:39Z","lastTransitionTime":"2026-01-21T10:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.487805 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 15:08:14.525029028 +0000 UTC Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.514406 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.514458 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:06:39 crc kubenswrapper[4684]: E0121 10:06:39.514559 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.514458 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:06:39 crc kubenswrapper[4684]: E0121 10:06:39.514637 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:06:39 crc kubenswrapper[4684]: E0121 10:06:39.514658 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.572543 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.572581 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.572591 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.572608 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.572619 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:39Z","lastTransitionTime":"2026-01-21T10:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.674861 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.674910 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.674925 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.674944 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.674959 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:39Z","lastTransitionTime":"2026-01-21T10:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.769867 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.777712 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.777769 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.777791 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.777819 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.777856 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:39Z","lastTransitionTime":"2026-01-21T10:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.787151 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e8c501924c32a0dc19e2006504a2724f48712feacd1e94c143c4013887ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.812631 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a866
27cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.829295 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c90d2149e2b9ca12f6ef71ae05e440a733043e4c993e48adc0651a2dc7dd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:27Z\\\",\\\"
reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473
a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.848658 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\
\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.863961 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.881208 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.881283 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.881308 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.881342 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.881397 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:39Z","lastTransitionTime":"2026-01-21T10:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.881551 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.899852 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.913193 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.924329 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.945897 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e372d443f125d5783bc805f64bf97f915eef0ad
bf471c4272affdb31bbe1e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e372d443f125d5783bc805f64bf97f915eef0adbf471c4272affdb31bbe1e54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"message\\\":\\\".963715 6101 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:06:34.963746 6101 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:06:34.963771 6101 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:06:34.963799 6101 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:06:34.963830 6101 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 10:06:34.966372 6101 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 10:06:34.966394 6101 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 10:06:34.966409 6101 factory.go:656] Stopping watch factory\\\\nI0121 10:06:34.966423 6101 ovnkube.go:599] Stopped ovnkube\\\\nI0121 10:06:34.966444 6101 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6vjwl_openshift-ovn-kubernetes(8dac888b-051f-405a-8c23-60c205d2aecc)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.962727 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.975555 4684 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.984144 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.984194 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.984207 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.984227 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.984240 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:39Z","lastTransitionTime":"2026-01-21T10:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:39 crc kubenswrapper[4684]: I0121 10:06:39.990965 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925b275b3ce3428bc68dd39075a8e86e83b35703ea89a529c6ce6bbf53fa215a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:39Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.004453 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:40Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.017136 4684 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feda3c91-51cd-4b8e-becd-177b669f0ee1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc4a441c86c981477aad2074fb6af273a45679d04b8f017405e4259fa3af0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbfdabb57b220a10323f2265b70a9583abee7feee55914699d619d556f9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xzdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:40Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.028969 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7wzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49971ee3-e56a-4d50-8fc5-231bdcfc92d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58x9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58x9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7wzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:40Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.043830 4684 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45cf8c338a42eaff90f551f49c7b98fe724e
c8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:40Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.087266 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.087321 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.087338 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.087379 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.087399 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:40Z","lastTransitionTime":"2026-01-21T10:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.189467 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.189530 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.189546 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.189565 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.189606 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:40Z","lastTransitionTime":"2026-01-21T10:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.292468 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.292511 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.292529 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.292547 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.292558 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:40Z","lastTransitionTime":"2026-01-21T10:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.358494 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49971ee3-e56a-4d50-8fc5-231bdcfc92d5-metrics-certs\") pod \"network-metrics-daemon-7wzh7\" (UID: \"49971ee3-e56a-4d50-8fc5-231bdcfc92d5\") " pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:06:40 crc kubenswrapper[4684]: E0121 10:06:40.358703 4684 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:06:40 crc kubenswrapper[4684]: E0121 10:06:40.358815 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49971ee3-e56a-4d50-8fc5-231bdcfc92d5-metrics-certs podName:49971ee3-e56a-4d50-8fc5-231bdcfc92d5 nodeName:}" failed. No retries permitted until 2026-01-21 10:06:44.358793039 +0000 UTC m=+42.116876006 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/49971ee3-e56a-4d50-8fc5-231bdcfc92d5-metrics-certs") pod "network-metrics-daemon-7wzh7" (UID: "49971ee3-e56a-4d50-8fc5-231bdcfc92d5") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.395560 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.395589 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.395599 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.395614 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.395624 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:40Z","lastTransitionTime":"2026-01-21T10:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.488748 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 03:07:01.821589858 +0000 UTC Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.498623 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.498786 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.499073 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.499105 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.499122 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:40Z","lastTransitionTime":"2026-01-21T10:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.515122 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:06:40 crc kubenswrapper[4684]: E0121 10:06:40.515316 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.602523 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.602572 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.602585 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.602605 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.602617 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:40Z","lastTransitionTime":"2026-01-21T10:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.705631 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.705674 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.705686 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.705707 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.705742 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:40Z","lastTransitionTime":"2026-01-21T10:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.808655 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.808717 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.808734 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.808757 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.808774 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:40Z","lastTransitionTime":"2026-01-21T10:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.911397 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.911453 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.911472 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.911495 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:40 crc kubenswrapper[4684]: I0121 10:06:40.911510 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:40Z","lastTransitionTime":"2026-01-21T10:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.013871 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.013922 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.013935 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.013963 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.013976 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:41Z","lastTransitionTime":"2026-01-21T10:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.117001 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.117036 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.117048 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.117066 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.117078 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:41Z","lastTransitionTime":"2026-01-21T10:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.220151 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.220207 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.220225 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.220250 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.220269 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:41Z","lastTransitionTime":"2026-01-21T10:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.324227 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.324288 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.324306 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.324333 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.324354 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:41Z","lastTransitionTime":"2026-01-21T10:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.427406 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.427475 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.427493 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.427520 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.427537 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:41Z","lastTransitionTime":"2026-01-21T10:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.489715 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 19:47:10.738466256 +0000 UTC Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.514047 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.514131 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.514178 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:06:41 crc kubenswrapper[4684]: E0121 10:06:41.514256 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:06:41 crc kubenswrapper[4684]: E0121 10:06:41.514337 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:06:41 crc kubenswrapper[4684]: E0121 10:06:41.514486 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.531576 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.531630 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.531644 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.531663 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.531676 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:41Z","lastTransitionTime":"2026-01-21T10:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.634200 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.634265 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.634283 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.634310 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.634331 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:41Z","lastTransitionTime":"2026-01-21T10:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.737495 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.737551 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.737565 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.737585 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.737603 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:41Z","lastTransitionTime":"2026-01-21T10:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.840096 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.840149 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.840161 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.840186 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.840203 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:41Z","lastTransitionTime":"2026-01-21T10:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.943140 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.943189 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.943202 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.943223 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:41 crc kubenswrapper[4684]: I0121 10:06:41.943235 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:41Z","lastTransitionTime":"2026-01-21T10:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.045597 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.045629 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.045637 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.045653 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.045662 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:42Z","lastTransitionTime":"2026-01-21T10:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.148455 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.148490 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.148501 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.148517 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.148531 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:42Z","lastTransitionTime":"2026-01-21T10:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.252406 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.252457 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.252471 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.252492 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.252507 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:42Z","lastTransitionTime":"2026-01-21T10:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.354699 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.354752 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.354765 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.354784 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.354796 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:42Z","lastTransitionTime":"2026-01-21T10:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.457754 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.457817 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.457827 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.457846 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.457857 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:42Z","lastTransitionTime":"2026-01-21T10:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.490429 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 08:40:33.48089561 +0000 UTC Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.514046 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:06:42 crc kubenswrapper[4684]: E0121 10:06:42.514263 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.542674 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a86627cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c20
8ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.557314 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e8c501924c32a0dc19e2006504a2724f48712feacd1e94c143c4013887ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.560070 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.560118 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.560132 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.560152 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.560166 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:42Z","lastTransitionTime":"2026-01-21T10:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.584770 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.605481 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c90d2149e2b9ca12f6ef71ae05e440a733043e4c993e48adc0651a2dc7dd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.621442 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.642096 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.658353 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.663220 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.663285 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.663305 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.663336 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.663354 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:42Z","lastTransitionTime":"2026-01-21T10:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.689335 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e372d443f125d5783bc805f64bf97f915eef0adbf471c4272affdb31bbe1e54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e372d443f125d5783bc805f64bf97f915eef0adbf471c4272affdb31bbe1e54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"message\\\":\\\".963715 6101 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:06:34.963746 6101 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:06:34.963771 6101 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:06:34.963799 6101 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:06:34.963830 6101 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 10:06:34.966372 6101 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 10:06:34.966394 6101 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 10:06:34.966409 6101 factory.go:656] Stopping watch factory\\\\nI0121 10:06:34.966423 6101 ovnkube.go:599] Stopped ovnkube\\\\nI0121 10:06:34.966444 6101 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6vjwl_openshift-ovn-kubernetes(8dac888b-051f-405a-8c23-60c205d2aecc)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.711507 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.732793 4684 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.752527 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.766140 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.766183 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.766194 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.766213 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.766223 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:42Z","lastTransitionTime":"2026-01-21T10:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.768094 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.782160 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feda3c91-51cd-4b8e-becd-177b669f0ee1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc4a441c86c981477aad2074fb6af273a45679d04b8f017405e4259fa3af0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbfdabb57b220a10323f2265b70a9583abee7feee55914699d619d556f9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:
35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xzdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.794594 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7wzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49971ee3-e56a-4d50-8fc5-231bdcfc92d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58x9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58x9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7wzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.806601 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45cf8c338a42eaff90f551f49c7b98fe724ec8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.822065 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.838103 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925b275b3ce3428bc68dd39075a8e86e83b35703ea89a529c6ce6bbf53fa215a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:42Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.869174 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.869211 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.869222 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.869237 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.869247 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:42Z","lastTransitionTime":"2026-01-21T10:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.972247 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.972290 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.972305 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.972323 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:42 crc kubenswrapper[4684]: I0121 10:06:42.972334 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:42Z","lastTransitionTime":"2026-01-21T10:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.074929 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.075155 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.075355 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.075446 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.075556 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:43Z","lastTransitionTime":"2026-01-21T10:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.178297 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.178408 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.178450 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.178478 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.178495 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:43Z","lastTransitionTime":"2026-01-21T10:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.281462 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.281602 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.281625 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.281657 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.281678 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:43Z","lastTransitionTime":"2026-01-21T10:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.384565 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.384628 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.384655 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.384701 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.384723 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:43Z","lastTransitionTime":"2026-01-21T10:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.487596 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.487682 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.487706 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.487739 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.487766 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:43Z","lastTransitionTime":"2026-01-21T10:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.490705 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 19:29:21.037798088 +0000 UTC Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.514482 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:06:43 crc kubenswrapper[4684]: E0121 10:06:43.514944 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.514636 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:06:43 crc kubenswrapper[4684]: E0121 10:06:43.515217 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.514513 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:06:43 crc kubenswrapper[4684]: E0121 10:06:43.515515 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.591122 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.591191 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.591212 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.591242 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.591260 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:43Z","lastTransitionTime":"2026-01-21T10:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.694869 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.694930 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.694949 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.694975 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.694995 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:43Z","lastTransitionTime":"2026-01-21T10:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.798202 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.798264 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.798284 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.798312 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.798332 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:43Z","lastTransitionTime":"2026-01-21T10:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.906511 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.906572 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.907287 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.907319 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:43 crc kubenswrapper[4684]: I0121 10:06:43.907338 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:43Z","lastTransitionTime":"2026-01-21T10:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.010281 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.010582 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.010752 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.010841 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.010912 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:44Z","lastTransitionTime":"2026-01-21T10:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.113936 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.113997 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.114012 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.114035 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.114051 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:44Z","lastTransitionTime":"2026-01-21T10:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.216700 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.216734 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.216744 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.216756 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.216763 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:44Z","lastTransitionTime":"2026-01-21T10:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.319905 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.319956 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.319969 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.319987 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.319999 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:44Z","lastTransitionTime":"2026-01-21T10:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.404230 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49971ee3-e56a-4d50-8fc5-231bdcfc92d5-metrics-certs\") pod \"network-metrics-daemon-7wzh7\" (UID: \"49971ee3-e56a-4d50-8fc5-231bdcfc92d5\") " pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:06:44 crc kubenswrapper[4684]: E0121 10:06:44.404449 4684 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:06:44 crc kubenswrapper[4684]: E0121 10:06:44.404531 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49971ee3-e56a-4d50-8fc5-231bdcfc92d5-metrics-certs podName:49971ee3-e56a-4d50-8fc5-231bdcfc92d5 nodeName:}" failed. No retries permitted until 2026-01-21 10:06:52.404507488 +0000 UTC m=+50.162590455 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/49971ee3-e56a-4d50-8fc5-231bdcfc92d5-metrics-certs") pod "network-metrics-daemon-7wzh7" (UID: "49971ee3-e56a-4d50-8fc5-231bdcfc92d5") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.423115 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.423175 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.423189 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.423224 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.423238 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:44Z","lastTransitionTime":"2026-01-21T10:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.491936 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 22:51:19.679366039 +0000 UTC Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.513772 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:06:44 crc kubenswrapper[4684]: E0121 10:06:44.513919 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.525830 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.525875 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.525913 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.525937 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.525951 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:44Z","lastTransitionTime":"2026-01-21T10:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.628577 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.628618 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.628627 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.628647 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.628658 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:44Z","lastTransitionTime":"2026-01-21T10:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.731734 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.731802 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.731817 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.731838 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.731851 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:44Z","lastTransitionTime":"2026-01-21T10:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.834601 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.834638 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.834651 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.834668 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.834680 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:44Z","lastTransitionTime":"2026-01-21T10:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.936884 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.936939 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.936954 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.936976 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:44 crc kubenswrapper[4684]: I0121 10:06:44.936993 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:44Z","lastTransitionTime":"2026-01-21T10:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.040960 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.041000 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.041012 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.041030 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.041041 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:45Z","lastTransitionTime":"2026-01-21T10:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.143546 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.143596 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.143613 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.143632 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.143645 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:45Z","lastTransitionTime":"2026-01-21T10:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.246641 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.246692 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.246702 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.246718 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.246728 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:45Z","lastTransitionTime":"2026-01-21T10:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.349573 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.349618 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.349632 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.349652 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.349665 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:45Z","lastTransitionTime":"2026-01-21T10:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.452027 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.452080 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.452095 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.452113 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.452125 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:45Z","lastTransitionTime":"2026-01-21T10:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.492535 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 00:33:16.064213306 +0000 UTC Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.514036 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.514070 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.514034 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:06:45 crc kubenswrapper[4684]: E0121 10:06:45.514202 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:06:45 crc kubenswrapper[4684]: E0121 10:06:45.514277 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:06:45 crc kubenswrapper[4684]: E0121 10:06:45.514405 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.554211 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.554265 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.554284 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.554314 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.554343 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:45Z","lastTransitionTime":"2026-01-21T10:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.657036 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.657071 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.657083 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.657102 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.657112 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:45Z","lastTransitionTime":"2026-01-21T10:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.759690 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.759733 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.759745 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.759763 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.759777 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:45Z","lastTransitionTime":"2026-01-21T10:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.862170 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.862208 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.862219 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.862233 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.862245 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:45Z","lastTransitionTime":"2026-01-21T10:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.902127 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.902179 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.902193 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.902306 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.902323 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:45Z","lastTransitionTime":"2026-01-21T10:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:45 crc kubenswrapper[4684]: E0121 10:06:45.914995 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74077632-3431-446e-a506-0b618e843835\\\",\\\"systemUUID\\\":\\\"667581df-3edd-489b-b118-69df188c96a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:45Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.920144 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.920176 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.920187 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.920203 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.920213 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:45Z","lastTransitionTime":"2026-01-21T10:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:45 crc kubenswrapper[4684]: E0121 10:06:45.935154 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74077632-3431-446e-a506-0b618e843835\\\",\\\"systemUUID\\\":\\\"667581df-3edd-489b-b118-69df188c96a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:45Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.939463 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.939494 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.939503 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.939519 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.939529 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:45Z","lastTransitionTime":"2026-01-21T10:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:45 crc kubenswrapper[4684]: E0121 10:06:45.953225 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74077632-3431-446e-a506-0b618e843835\\\",\\\"systemUUID\\\":\\\"667581df-3edd-489b-b118-69df188c96a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:45Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.958594 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.958683 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.958709 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.958739 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.958764 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:45Z","lastTransitionTime":"2026-01-21T10:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:45 crc kubenswrapper[4684]: E0121 10:06:45.979872 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74077632-3431-446e-a506-0b618e843835\\\",\\\"systemUUID\\\":\\\"667581df-3edd-489b-b118-69df188c96a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:45Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.986660 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.986745 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.986764 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.986807 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:45 crc kubenswrapper[4684]: I0121 10:06:45.986825 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:45Z","lastTransitionTime":"2026-01-21T10:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:46 crc kubenswrapper[4684]: E0121 10:06:46.007009 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74077632-3431-446e-a506-0b618e843835\\\",\\\"systemUUID\\\":\\\"667581df-3edd-489b-b118-69df188c96a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:46 crc kubenswrapper[4684]: E0121 10:06:46.007505 4684 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.009503 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.009579 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.009592 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.009609 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.009620 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:46Z","lastTransitionTime":"2026-01-21T10:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.112972 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.113028 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.113041 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.113060 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.113072 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:46Z","lastTransitionTime":"2026-01-21T10:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.216211 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.216270 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.216283 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.216303 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.216319 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:46Z","lastTransitionTime":"2026-01-21T10:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.318938 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.319003 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.319013 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.319046 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.319057 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:46Z","lastTransitionTime":"2026-01-21T10:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.421422 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.421487 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.421506 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.421532 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.421553 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:46Z","lastTransitionTime":"2026-01-21T10:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.493692 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 03:02:17.497204687 +0000 UTC Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.514325 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:06:46 crc kubenswrapper[4684]: E0121 10:06:46.514498 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
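The status-update failure recorded above is a TLS problem rather than a kubelet bug: the node-status PATCH is admitted through the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z, long before the node's current clock of 2026-01-21. A minimal diagnostic sketch in Python, assuming only that the endpoint from the error is reachable from the node and that the cryptography package is installed; nothing below is OpenShift tooling:

import socket
import ssl
from datetime import datetime, timezone

from cryptography import x509  # assumption: cryptography >= 42 for the *_utc accessors

HOST, PORT = "127.0.0.1", 9743  # endpoint taken from the webhook error above

ctx = ssl.create_default_context()
ctx.check_hostname = False       # we only want to read the certificate,
ctx.verify_mode = ssl.CERT_NONE  # not to validate it (it is expired)

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)  # DER bytes, returned even without verification

cert = x509.load_der_x509_certificate(der)
now = datetime.now(timezone.utc)
print("notBefore:", cert.not_valid_before_utc)
print("notAfter: ", cert.not_valid_after_utc)
print("expired:  ", now > cert.not_valid_after_utc)

Run against the dates in the log (notAfter 2025-08-24T17:21:41Z, current time 2026-01-21) this would print expired: True, which is exactly why every node-status update above exhausts its retry count.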
pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.523586 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.523636 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.523652 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.523674 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.523690 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:46Z","lastTransitionTime":"2026-01-21T10:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.626277 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.626399 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.626422 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.626449 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.626471 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:46Z","lastTransitionTime":"2026-01-21T10:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.729432 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.729757 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.729983 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.730095 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.730186 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:46Z","lastTransitionTime":"2026-01-21T10:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.832349 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.832401 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.832411 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.832429 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.832440 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:46Z","lastTransitionTime":"2026-01-21T10:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.934832 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.934872 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.934884 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.934901 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:46 crc kubenswrapper[4684]: I0121 10:06:46.934911 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:46Z","lastTransitionTime":"2026-01-21T10:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.037034 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.037081 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.037092 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.037107 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.037117 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:47Z","lastTransitionTime":"2026-01-21T10:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.139533 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.139650 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.139661 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.139681 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.139693 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:47Z","lastTransitionTime":"2026-01-21T10:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.242084 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.242126 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.242136 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.242173 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.242185 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:47Z","lastTransitionTime":"2026-01-21T10:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.346572 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.346620 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.346634 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.346662 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.346677 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:47Z","lastTransitionTime":"2026-01-21T10:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.449159 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.449201 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.449212 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.449228 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.449238 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:47Z","lastTransitionTime":"2026-01-21T10:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.495536 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 22:14:50.1619014 +0000 UTC Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.513921 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.514024 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:06:47 crc kubenswrapper[4684]: E0121 10:06:47.514116 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
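Every NodeNotReady heartbeat in this stretch carries the same root cause: the kubelet reports NetworkReady=false until a CNI network configuration appears in /etc/kubernetes/cni/net.d/. A small polling sketch, with the directory taken from the kubelet message and everything else (the loop, the extension filter, the interval) an illustrative assumption:

import os
import time

CNI_DIR = "/etc/kubernetes/cni/net.d"  # directory named in the kubelet message above

def cni_configs(path):
    # Return CNI config files (.conf/.conflist/.json) in path, if any.
    try:
        return sorted(f for f in os.listdir(path)
                      if f.endswith((".conf", ".conflist", ".json")))
    except FileNotFoundError:
        return []

while not cni_configs(CNI_DIR):
    print("no CNI config in", CNI_DIR, "- network plugin not ready yet")
    time.sleep(5)  # poll interval is an arbitrary choice for this sketch
print("CNI configuration present:", cni_configs(CNI_DIR))

The network plugin is expected to write that configuration once it is up, which is what the repeated "Has your network provider started?" hint is asking about; until then each pod sandbox creation above is skipped.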
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.513933 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:06:47 crc kubenswrapper[4684]: E0121 10:06:47.514188 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:06:47 crc kubenswrapper[4684]: E0121 10:06:47.514254 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.552729 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.552786 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.552801 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.552822 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.552836 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:47Z","lastTransitionTime":"2026-01-21T10:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.655886 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.655928 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.655943 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.655961 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.655972 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:47Z","lastTransitionTime":"2026-01-21T10:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.759165 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.759211 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.759222 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.759241 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.759256 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:47Z","lastTransitionTime":"2026-01-21T10:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.862105 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.862170 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.862196 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.862229 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.862253 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:47Z","lastTransitionTime":"2026-01-21T10:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.964886 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.964967 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.964991 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.965024 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:47 crc kubenswrapper[4684]: I0121 10:06:47.965048 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:47Z","lastTransitionTime":"2026-01-21T10:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.068197 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.068240 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.068250 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.068270 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.068281 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:48Z","lastTransitionTime":"2026-01-21T10:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.170631 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.170681 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.170695 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.170722 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.170734 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:48Z","lastTransitionTime":"2026-01-21T10:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.273549 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.273604 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.273620 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.273639 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.273650 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:48Z","lastTransitionTime":"2026-01-21T10:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.376282 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.376321 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.376335 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.376354 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.376381 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:48Z","lastTransitionTime":"2026-01-21T10:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.478896 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.478945 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.478955 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.478972 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.478985 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:48Z","lastTransitionTime":"2026-01-21T10:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.496449 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 20:51:10.772683291 +0000 UTC Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.513951 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:06:48 crc kubenswrapper[4684]: E0121 10:06:48.514143 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.581955 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.582019 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.582029 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.582051 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.582064 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:48Z","lastTransitionTime":"2026-01-21T10:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.685327 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.685417 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.685432 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.685455 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.685469 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:48Z","lastTransitionTime":"2026-01-21T10:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.788520 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.788950 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.789078 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.789201 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.789315 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:48Z","lastTransitionTime":"2026-01-21T10:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.892734 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.893245 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.893581 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.893739 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.893876 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:48Z","lastTransitionTime":"2026-01-21T10:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.997292 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.997336 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.997346 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.997396 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:48 crc kubenswrapper[4684]: I0121 10:06:48.997408 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:48Z","lastTransitionTime":"2026-01-21T10:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.100758 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.101156 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.101236 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.101303 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.101399 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:49Z","lastTransitionTime":"2026-01-21T10:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.204420 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.204473 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.204493 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.204520 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.204531 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:49Z","lastTransitionTime":"2026-01-21T10:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.307880 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.307930 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.307940 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.307957 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.307969 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:49Z","lastTransitionTime":"2026-01-21T10:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.411556 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.411611 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.411622 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.411645 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.411680 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:49Z","lastTransitionTime":"2026-01-21T10:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.497122 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 11:28:12.963339231 +0000 UTC Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.513925 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.514040 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.514111 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:06:49 crc kubenswrapper[4684]: E0121 10:06:49.514313 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:06:49 crc kubenswrapper[4684]: E0121 10:06:49.514470 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.514659 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:49 crc kubenswrapper[4684]: E0121 10:06:49.514659 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.514679 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.514737 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.514760 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.514773 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:49Z","lastTransitionTime":"2026-01-21T10:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.618382 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.618446 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.618456 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.618475 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.618486 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:49Z","lastTransitionTime":"2026-01-21T10:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.721858 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.721906 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.721917 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.721932 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.721941 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:49Z","lastTransitionTime":"2026-01-21T10:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.824783 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.824865 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.824878 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.824905 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.824919 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:49Z","lastTransitionTime":"2026-01-21T10:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.927838 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.927901 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.927918 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.927946 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:49 crc kubenswrapper[4684]: I0121 10:06:49.927969 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:49Z","lastTransitionTime":"2026-01-21T10:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.030799 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.030871 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.030885 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.030906 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.030919 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:50Z","lastTransitionTime":"2026-01-21T10:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.134783 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.134850 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.134871 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.134898 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.134917 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:50Z","lastTransitionTime":"2026-01-21T10:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.238439 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.238512 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.238532 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.238561 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.238579 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:50Z","lastTransitionTime":"2026-01-21T10:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.341773 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.341811 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.341838 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.341856 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.341866 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:50Z","lastTransitionTime":"2026-01-21T10:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.445153 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.445229 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.445238 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.445258 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.445268 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:50Z","lastTransitionTime":"2026-01-21T10:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.498255 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 01:03:28.583254454 +0000 UTC Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.513806 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:06:50 crc kubenswrapper[4684]: E0121 10:06:50.514545 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.515102 4684 scope.go:117] "RemoveContainer" containerID="1e372d443f125d5783bc805f64bf97f915eef0adbf471c4272affdb31bbe1e54" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.548985 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.549063 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.549084 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.549117 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.549141 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:50Z","lastTransitionTime":"2026-01-21T10:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.652032 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.652069 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.652078 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.652113 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.652124 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:50Z","lastTransitionTime":"2026-01-21T10:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.756727 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.756767 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.756780 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.756802 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.756820 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:50Z","lastTransitionTime":"2026-01-21T10:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.860698 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.860771 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.860793 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.860830 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.860854 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:50Z","lastTransitionTime":"2026-01-21T10:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.884308 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6vjwl_8dac888b-051f-405a-8c23-60c205d2aecc/ovnkube-controller/1.log" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.888036 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" event={"ID":"8dac888b-051f-405a-8c23-60c205d2aecc","Type":"ContainerStarted","Data":"764fb7faccdef6be93c776669dd962f59f34450446b728987af9b182aba23daf"} Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.888684 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.911177 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c90d2149e2b9ca12f6ef71ae05e440a733043e4c993e48adc0651a2dc7dd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"exitCode\\
\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:50Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.929442 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:50Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.945279 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:50Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.961151 4684 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:50Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.964964 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.965064 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:50 crc kubenswrapper[4684]: 
I0121 10:06:50.965096 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.965135 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.965159 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:50Z","lastTransitionTime":"2026-01-21T10:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.977799 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:50Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:50 crc kubenswrapper[4684]: I0121 10:06:50.992395 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:50Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.011754 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:51Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.035128 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://764fb7faccdef6be93c776669dd962f59f34450446b728987af9b182aba23daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e372d443f125d5783bc805f64bf97f915eef0adbf471c4272affdb31bbe1e54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"message\\\":\\\".963715 6101 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:06:34.963746 6101 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:06:34.963771 6101 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:06:34.963799 6101 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:06:34.963830 6101 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 10:06:34.966372 6101 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 10:06:34.966394 6101 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 10:06:34.966409 6101 factory.go:656] Stopping watch factory\\\\nI0121 10:06:34.966423 6101 ovnkube.go:599] Stopped ovnkube\\\\nI0121 10:06:34.966444 6101 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\
\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:51Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.056624 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:51Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.072493 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.072527 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.072537 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.072554 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.072563 4684 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:51Z","lastTransitionTime":"2026-01-21T10:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.074677 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:51Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.092641 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925b275b3ce3428bc68dd39075a8e86e83b35703ea89a529c6ce6bbf53fa215a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:51Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.107481 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:51Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.121991 4684 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feda3c91-51cd-4b8e-becd-177b669f0ee1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc4a441c86c981477aad2074fb6af273a45679d04b8f017405e4259fa3af0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbfdabb57b220a10323f2265b70a9583abee7feee55914699d619d556f9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xzdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:51Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.134857 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7wzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49971ee3-e56a-4d50-8fc5-231bdcfc92d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58x9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58x9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7wzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:51Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.150886 4684 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45cf8c338a42eaff90f551f49c7b98fe724e
c8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:51Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.162785 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e8c501924c32a0dc19e2006504a2724f48712feacd1e94c143c4013887ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:51Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.174766 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.174813 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.174824 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.174841 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.174850 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:51Z","lastTransitionTime":"2026-01-21T10:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.187465 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a866
27cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:51Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.278035 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.278101 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.278121 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.278148 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.278168 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:51Z","lastTransitionTime":"2026-01-21T10:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.380737 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.380826 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.380841 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.380866 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.380881 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:51Z","lastTransitionTime":"2026-01-21T10:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.483679 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.483724 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.483734 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.483754 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.483766 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:51Z","lastTransitionTime":"2026-01-21T10:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.499349 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 18:35:47.338646835 +0000 UTC Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.513790 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.513872 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.513912 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:06:51 crc kubenswrapper[4684]: E0121 10:06:51.513988 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:06:51 crc kubenswrapper[4684]: E0121 10:06:51.514081 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:06:51 crc kubenswrapper[4684]: E0121 10:06:51.514262 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.586247 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.586323 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.586344 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.586411 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.586438 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:51Z","lastTransitionTime":"2026-01-21T10:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.690954 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.690997 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.691010 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.691026 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.691036 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:51Z","lastTransitionTime":"2026-01-21T10:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.793701 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.793784 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.793811 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.793848 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.793874 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:51Z","lastTransitionTime":"2026-01-21T10:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.894785 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6vjwl_8dac888b-051f-405a-8c23-60c205d2aecc/ovnkube-controller/2.log" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.895471 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6vjwl_8dac888b-051f-405a-8c23-60c205d2aecc/ovnkube-controller/1.log" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.895690 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.895733 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.895751 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.895772 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.895785 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:51Z","lastTransitionTime":"2026-01-21T10:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.898278 4684 generic.go:334] "Generic (PLEG): container finished" podID="8dac888b-051f-405a-8c23-60c205d2aecc" containerID="764fb7faccdef6be93c776669dd962f59f34450446b728987af9b182aba23daf" exitCode=1 Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.898339 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" event={"ID":"8dac888b-051f-405a-8c23-60c205d2aecc","Type":"ContainerDied","Data":"764fb7faccdef6be93c776669dd962f59f34450446b728987af9b182aba23daf"} Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.898394 4684 scope.go:117] "RemoveContainer" containerID="1e372d443f125d5783bc805f64bf97f915eef0adbf471c4272affdb31bbe1e54" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.899103 4684 scope.go:117] "RemoveContainer" containerID="764fb7faccdef6be93c776669dd962f59f34450446b728987af9b182aba23daf" Jan 21 10:06:51 crc kubenswrapper[4684]: E0121 10:06:51.899280 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6vjwl_openshift-ovn-kubernetes(8dac888b-051f-405a-8c23-60c205d2aecc)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.923315 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:51Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.942157 4684 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feda3c91-51cd-4b8e-becd-177b669f0ee1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc4a441c86c981477aad2074fb6af273a45679d04b8f017405e4259fa3af0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbfdabb57b220a10323f2265b70a9583abee7feee55914699d619d556f9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xzdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:51Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.957399 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7wzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49971ee3-e56a-4d50-8fc5-231bdcfc92d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58x9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58x9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7wzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:51Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.974984 4684 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45cf8c338a42eaff90f551f49c7b98fe724e
c8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:51Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:51 crc kubenswrapper[4684]: I0121 10:06:51.993073 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:51Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.000425 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.000481 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.000512 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.000540 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.000556 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:52Z","lastTransitionTime":"2026-01-21T10:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.014329 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925b275b3ce3428bc68dd39075a8e86e83b35703ea89a529c6ce6bbf53fa215a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.037200 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a866
27cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.051052 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e8c501924c32a0dc19e2006504a2724f48712feacd1e94c143c4013887ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.069490 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.089121 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c90d2149e2b9ca12f6ef71ae05e440a733043e4c993e48adc0651a2dc7dd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.102565 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.103443 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.103486 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.103500 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.103520 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.103534 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:52Z","lastTransitionTime":"2026-01-21T10:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.116024 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.127650 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.147261 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://764fb7faccdef6be93c776669dd962f59f34450446b728987af9b182aba23daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e372d443f125d5783bc805f64bf97f915eef0adbf471c4272affdb31bbe1e54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"message\\\":\\\".963715 6101 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:06:34.963746 6101 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:06:34.963771 6101 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:06:34.963799 6101 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:06:34.963830 6101 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 10:06:34.966372 6101 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 10:06:34.966394 6101 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 10:06:34.966409 6101 factory.go:656] Stopping watch factory\\\\nI0121 10:06:34.966423 6101 ovnkube.go:599] Stopped ovnkube\\\\nI0121 10:06:34.966444 6101 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://764fb7faccdef6be93c776669dd962f59f34450446b728987af9b182aba23daf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:06:51Z\\\",\\\"message\\\":\\\"70 6294 services_controller.go:444] Built service openshift-marketplace/community-operators LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0121 10:06:51.634485 6294 services_controller.go:443] Built service openshift-kube-apiserver-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.109\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0121 10:06:51.634499 6294 services_controller.go:444] Built service openshift-kube-apiserver-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0121 10:06:51.633708 6294 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0121 10:06:51.634508 6294 services_controller.go:445] Built service openshift-kube-apiserver-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0121 10:06:51.634522 6294 services_controller.go:451] Built service openshift-kube-apiserver-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
E\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.163355 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluste
r-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 
1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.181049 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.198491 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.206182 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.206241 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.206254 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.206277 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.206301 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:52Z","lastTransitionTime":"2026-01-21T10:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.309304 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.309343 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.309352 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.309392 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.309405 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:52Z","lastTransitionTime":"2026-01-21T10:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.408392 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49971ee3-e56a-4d50-8fc5-231bdcfc92d5-metrics-certs\") pod \"network-metrics-daemon-7wzh7\" (UID: \"49971ee3-e56a-4d50-8fc5-231bdcfc92d5\") " pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:06:52 crc kubenswrapper[4684]: E0121 10:06:52.408585 4684 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:06:52 crc kubenswrapper[4684]: E0121 10:06:52.408657 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49971ee3-e56a-4d50-8fc5-231bdcfc92d5-metrics-certs podName:49971ee3-e56a-4d50-8fc5-231bdcfc92d5 nodeName:}" failed. No retries permitted until 2026-01-21 10:07:08.408636262 +0000 UTC m=+66.166719229 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/49971ee3-e56a-4d50-8fc5-231bdcfc92d5-metrics-certs") pod "network-metrics-daemon-7wzh7" (UID: "49971ee3-e56a-4d50-8fc5-231bdcfc92d5") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.413862 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.413892 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.413902 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.413923 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.413935 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:52Z","lastTransitionTime":"2026-01-21T10:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.499857 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 14:50:27.545734848 +0000 UTC Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.513701 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:06:52 crc kubenswrapper[4684]: E0121 10:06:52.513898 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.519627 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.519681 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.519698 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.519724 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.519743 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:52Z","lastTransitionTime":"2026-01-21T10:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.535945 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.553347 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feda3c91-51cd-4b8e-becd-177b669f0ee1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc4a441c86c981477aad2074fb6af273a45679d04b8f017405e4259fa3af0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbfdabb57b220a10323f2265b70a9583abee7feee55914699d619d556f9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:
35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xzdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.571768 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7wzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49971ee3-e56a-4d50-8fc5-231bdcfc92d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58x9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58x9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7wzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.588720 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45cf8c338a42eaff90f551f49c7b98fe724ec8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.607718 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.623588 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.623655 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.623679 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.623711 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.623733 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:52Z","lastTransitionTime":"2026-01-21T10:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.628957 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925b275b3ce3428bc68dd39075a8e86e83b35703ea89a529c6ce6bbf53fa215a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.660132 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a866
27cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.673023 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e8c501924c32a0dc19e2006504a2724f48712feacd1e94c143c4013887ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.688801 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.708299 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c90d2149e2b9ca12f6ef71ae05e440a733043e4c993e48adc0651a2dc7dd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.723135 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.726629 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.726689 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.726707 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.726733 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.726747 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:52Z","lastTransitionTime":"2026-01-21T10:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.740050 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.753781 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.779653 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://764fb7faccdef6be93c776669dd962f59f34450446b728987af9b182aba23daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e372d443f125d5783bc805f64bf97f915eef0adbf471c4272affdb31bbe1e54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"message\\\":\\\".963715 6101 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:06:34.963746 6101 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:06:34.963771 6101 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:06:34.963799 6101 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:06:34.963830 6101 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 10:06:34.966372 6101 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 10:06:34.966394 6101 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 10:06:34.966409 6101 factory.go:656] Stopping watch factory\\\\nI0121 10:06:34.966423 6101 ovnkube.go:599] Stopped ovnkube\\\\nI0121 10:06:34.966444 6101 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://764fb7faccdef6be93c776669dd962f59f34450446b728987af9b182aba23daf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:06:51Z\\\",\\\"message\\\":\\\"70 6294 services_controller.go:444] Built service openshift-marketplace/community-operators LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0121 10:06:51.634485 6294 services_controller.go:443] Built service openshift-kube-apiserver-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.109\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0121 10:06:51.634499 6294 services_controller.go:444] Built service openshift-kube-apiserver-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0121 10:06:51.633708 6294 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0121 10:06:51.634508 6294 services_controller.go:445] Built service openshift-kube-apiserver-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0121 10:06:51.634522 6294 services_controller.go:451] Built service openshift-kube-apiserver-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
E\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.799313 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluste
r-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 
1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.814148 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.826438 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.829408 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.829469 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.829478 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.829496 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.829506 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:52Z","lastTransitionTime":"2026-01-21T10:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.905409 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6vjwl_8dac888b-051f-405a-8c23-60c205d2aecc/ovnkube-controller/2.log" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.910415 4684 scope.go:117] "RemoveContainer" containerID="764fb7faccdef6be93c776669dd962f59f34450446b728987af9b182aba23daf" Jan 21 10:06:52 crc kubenswrapper[4684]: E0121 10:06:52.910721 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6vjwl_openshift-ovn-kubernetes(8dac888b-051f-405a-8c23-60c205d2aecc)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.926682 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.932667 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.932713 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.932727 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.932748 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.932765 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:52Z","lastTransitionTime":"2026-01-21T10:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.941281 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925b275b3ce3428bc68dd39075a8e86e83b35703ea89a529c6ce6bbf53fa215a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.958228 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.972626 4684 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feda3c91-51cd-4b8e-becd-177b669f0ee1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc4a441c86c981477aad2074fb6af273a45679d04b8f017405e4259fa3af0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbfdabb57b220a10323f2265b70a9583abee7feee55914699d619d556f9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xzdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:52 crc kubenswrapper[4684]: I0121 10:06:52.987151 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7wzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49971ee3-e56a-4d50-8fc5-231bdcfc92d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58x9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58x9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7wzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.003303 4684 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45cf8c338a42eaff90f551f49c7b98fe724e
c8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.026832 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed9760512
33d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a86627cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.036170 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.036220 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 
10:06:53.036232 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.036251 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.036266 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:53Z","lastTransitionTime":"2026-01-21T10:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.039462 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e8c501924c32a0dc19e2006504a2724f48712feacd1e94c143c4013887ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:53 crc 
kubenswrapper[4684]: I0121 10:06:53.053425 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.070183 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.088785 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c90d2149e2b9ca12f6ef71ae05e440a733043e4c993e48adc0651a2dc7dd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.105016 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.119447 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.136035 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.139758 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.139794 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.139806 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.139824 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.139837 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:53Z","lastTransitionTime":"2026-01-21T10:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.150195 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.173614 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://764fb7faccdef6be93c776669dd962f59f34450446b728987af9b182aba23daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://764fb7faccdef6be93c776669dd962f59f34450446b728987af9b182aba23daf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:06:51Z\\\",\\\"message\\\":\\\"70 6294 services_controller.go:444] Built service openshift-marketplace/community-operators LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0121 10:06:51.634485 6294 services_controller.go:443] Built service openshift-kube-apiserver-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.109\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0121 10:06:51.634499 6294 services_controller.go:444] Built service openshift-kube-apiserver-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0121 10:06:51.633708 6294 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0121 10:06:51.634508 6294 services_controller.go:445] Built service openshift-kube-apiserver-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0121 10:06:51.634522 6294 services_controller.go:451] Built service openshift-kube-apiserver-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", E\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6vjwl_openshift-ovn-kubernetes(8dac888b-051f-405a-8c23-60c205d2aecc)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.192853 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.242807 4684 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.242855 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.242867 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.242884 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.242896 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:53Z","lastTransitionTime":"2026-01-21T10:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.345955 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.346024 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.346039 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.346067 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.346082 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:53Z","lastTransitionTime":"2026-01-21T10:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.449023 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.449096 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.449109 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.449128 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.449137 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:53Z","lastTransitionTime":"2026-01-21T10:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.500723 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 09:25:31.853768345 +0000 UTC Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.514151 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:06:53 crc kubenswrapper[4684]: E0121 10:06:53.514352 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.514982 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.515112 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:06:53 crc kubenswrapper[4684]: E0121 10:06:53.515147 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:06:53 crc kubenswrapper[4684]: E0121 10:06:53.519174 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.552022 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.552461 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.553015 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.553157 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.553299 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:53Z","lastTransitionTime":"2026-01-21T10:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.656793 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.656835 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.656848 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.656867 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.656881 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:53Z","lastTransitionTime":"2026-01-21T10:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.723695 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.723839 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.723879 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:06:53 crc kubenswrapper[4684]: E0121 10:06:53.724026 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:06:53 crc kubenswrapper[4684]: E0121 10:06:53.724049 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:06:53 crc kubenswrapper[4684]: E0121 10:06:53.724063 4684 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:06:53 crc kubenswrapper[4684]: E0121 10:06:53.724155 4684 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:06:53 crc kubenswrapper[4684]: E0121 10:06:53.724024 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:07:25.723982567 +0000 UTC m=+83.482065534 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.724259 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.724314 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:06:53 crc kubenswrapper[4684]: E0121 10:06:53.724465 4684 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:06:53 crc kubenswrapper[4684]: E0121 10:06:53.724506 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:07:25.724496034 +0000 UTC m=+83.482579001 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:06:53 crc kubenswrapper[4684]: E0121 10:06:53.724556 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 10:07:25.724533225 +0000 UTC m=+83.482616362 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:06:53 crc kubenswrapper[4684]: E0121 10:06:53.724581 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:07:25.724571206 +0000 UTC m=+83.482654383 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:06:53 crc kubenswrapper[4684]: E0121 10:06:53.724663 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:06:53 crc kubenswrapper[4684]: E0121 10:06:53.724695 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:06:53 crc kubenswrapper[4684]: E0121 10:06:53.724710 4684 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:06:53 crc kubenswrapper[4684]: E0121 10:06:53.724751 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 10:07:25.724740462 +0000 UTC m=+83.482823639 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.759396 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.759441 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.759452 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.759472 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.759484 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:53Z","lastTransitionTime":"2026-01-21T10:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.862593 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.862635 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.862643 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.862661 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.862672 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:53Z","lastTransitionTime":"2026-01-21T10:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.965756 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.965831 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.965850 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.965873 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:53 crc kubenswrapper[4684]: I0121 10:06:53.965887 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:53Z","lastTransitionTime":"2026-01-21T10:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.068504 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.068540 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.068551 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.068570 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.068582 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:54Z","lastTransitionTime":"2026-01-21T10:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.171145 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.171207 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.171217 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.171233 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.171242 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:54Z","lastTransitionTime":"2026-01-21T10:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.273350 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.273405 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.273416 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.273437 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.273448 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:54Z","lastTransitionTime":"2026-01-21T10:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.376504 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.376549 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.376562 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.376581 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.376592 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:54Z","lastTransitionTime":"2026-01-21T10:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.478954 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.479000 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.479013 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.479033 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.479045 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:54Z","lastTransitionTime":"2026-01-21T10:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.501176 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 14:44:24.187757136 +0000 UTC Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.514282 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:06:54 crc kubenswrapper[4684]: E0121 10:06:54.514568 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.521653 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.540304 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.545018 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://764fb7faccdef6be93c776669dd962f59f344504
46b728987af9b182aba23daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://764fb7faccdef6be93c776669dd962f59f34450446b728987af9b182aba23daf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:06:51Z\\\",\\\"message\\\":\\\"70 6294 services_controller.go:444] Built service openshift-marketplace/community-operators LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0121 10:06:51.634485 6294 services_controller.go:443] Built service openshift-kube-apiserver-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.109\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0121 10:06:51.634499 6294 services_controller.go:444] Built service openshift-kube-apiserver-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0121 10:06:51.633708 6294 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0121 10:06:51.634508 6294 services_controller.go:445] Built service openshift-kube-apiserver-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0121 10:06:51.634522 6294 services_controller.go:451] Built service openshift-kube-apiserver-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", E\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6vjwl_openshift-ovn-kubernetes(8dac888b-051f-405a-8c23-60c205d2aecc)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.558986 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.574121 4684 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.582235 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.582275 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:54 crc 
kubenswrapper[4684]: I0121 10:06:54.582290 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.582310 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.582322 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:54Z","lastTransitionTime":"2026-01-21T10:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.590532 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.606314 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.619475 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.632049 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7wzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49971ee3-e56a-4d50-8fc5-231bdcfc92d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58x9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58x9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7wzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.648480 4684 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45
cf8c338a42eaff90f551f49c7b98fe724ec8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.664654 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.679420 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925b275b3ce3428bc68dd39075a8e86e83b35703ea89a529c6ce6bbf53fa215a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.684169 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.684200 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.684213 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.684230 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.684241 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:54Z","lastTransitionTime":"2026-01-21T10:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.691878 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.704582 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feda3c91-51cd-4b8e-becd-177b669f0ee1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc4a441c86c981477aad2074fb6af273a45679d04b8f017405e4259fa3af0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbfdabb57b220a10323f2265b70a9583abee7feee55914699d619d556f9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xzdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.726989 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a
5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a86627cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.738633 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e8c501924c32a0dc19e2006504a2724f48712feacd1e94c143c4013887ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.754003 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.770683 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c90d2149e2b9ca12f6ef71ae05e440a733043e4c993e48adc0651a2dc7dd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.787144 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.787213 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:54 crc 
kubenswrapper[4684]: I0121 10:06:54.787229 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.787251 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.787289 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:54Z","lastTransitionTime":"2026-01-21T10:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.788986 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"n
ame\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:54Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.890185 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.890234 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.890247 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.890266 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.890278 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:54Z","lastTransitionTime":"2026-01-21T10:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.993555 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.993606 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.993617 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.993637 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:54 crc kubenswrapper[4684]: I0121 10:06:54.993647 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:54Z","lastTransitionTime":"2026-01-21T10:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.096765 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.096819 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.096831 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.096847 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.096856 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:55Z","lastTransitionTime":"2026-01-21T10:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.199974 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.200019 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.200029 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.200048 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.200060 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:55Z","lastTransitionTime":"2026-01-21T10:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.302597 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.302633 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.302643 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.302658 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.302669 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:55Z","lastTransitionTime":"2026-01-21T10:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.405942 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.405973 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.405983 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.405998 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.406008 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:55Z","lastTransitionTime":"2026-01-21T10:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.501467 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 23:27:35.217238555 +0000 UTC Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.509202 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.509282 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.509308 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.509400 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.509432 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:55Z","lastTransitionTime":"2026-01-21T10:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.513975 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:06:55 crc kubenswrapper[4684]: E0121 10:06:55.514142 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.513975 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.513975 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:06:55 crc kubenswrapper[4684]: E0121 10:06:55.514451 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:06:55 crc kubenswrapper[4684]: E0121 10:06:55.514732 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.612513 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.612549 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.612560 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.612577 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.612588 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:55Z","lastTransitionTime":"2026-01-21T10:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.714988 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.715075 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.715090 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.715112 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.715127 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:55Z","lastTransitionTime":"2026-01-21T10:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.817640 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.817686 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.817698 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.817716 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.817727 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:55Z","lastTransitionTime":"2026-01-21T10:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.920591 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.920652 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.920671 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.920696 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:55 crc kubenswrapper[4684]: I0121 10:06:55.920712 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:55Z","lastTransitionTime":"2026-01-21T10:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.023645 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.023715 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.023729 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.023751 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.023765 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:56Z","lastTransitionTime":"2026-01-21T10:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.126667 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.126706 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.126717 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.126732 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.126744 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:56Z","lastTransitionTime":"2026-01-21T10:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.229703 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.229748 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.229757 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.229776 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.229790 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:56Z","lastTransitionTime":"2026-01-21T10:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.279429 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.279492 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.279507 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.279526 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.279536 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:56Z","lastTransitionTime":"2026-01-21T10:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:56 crc kubenswrapper[4684]: E0121 10:06:56.294660 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74077632-3431-446e-a506-0b618e843835\\\",\\\"systemUUID\\\":\\\"667581df-3edd-489b-b118-69df188c96a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:56Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.299848 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.299917 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.299939 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.299969 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.299991 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:56Z","lastTransitionTime":"2026-01-21T10:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:56 crc kubenswrapper[4684]: E0121 10:06:56.318235 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74077632-3431-446e-a506-0b618e843835\\\",\\\"systemUUID\\\":\\\"667581df-3edd-489b-b118-69df188c96a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:56Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.323220 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.323271 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
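Every retry in this burst fails for the same reason: the serving certificate of the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-21. A minimal Go sketch of a diagnostic (hypothetical, not part of the kubelet) that fetches that endpoint's certificate and prints its validity window to confirm the expiry:

```go
// Hypothetical diagnostic: inspect the webhook endpoint's certificate dates.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// InsecureSkipVerify lets us fetch the certificate even though
	// verification fails; we only want NotBefore/NotAfter, not a trusted session.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	// Mirrors the log's "current time ... is after ..." comparison.
	fmt.Printf("expired:   %v\n", time.Now().After(cert.NotAfter))
}
```

This has to run on the node itself, since the webhook only listens on loopback.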
event="NodeHasNoDiskPressure" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.323283 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.323303 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.323317 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:56Z","lastTransitionTime":"2026-01-21T10:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:56 crc kubenswrapper[4684]: E0121 10:06:56.339732 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74077632-3431-446e-a506-0b618e843835\\\",\\\"systemUUID\\\":\\\"667581df-3edd-489b-b118-69df188c96a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:56Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.344575 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.344616 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
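Between retries, setters.go records the same transition each time: memory, disk, and PID pressure stay healthy, but NetworkReady=false forces the Ready condition to False with reason KubeletNotReady. A simplified sketch of that derivation (types and field set reduced from k8s.io/api/core/v1; only the network gate is modeled, since that is the one failing here):

```go
// Simplified model of how the kubelet's status setters derive Ready.
package main

import (
	"fmt"
	"time"
)

// NodeCondition is a trimmed stand-in for v1.NodeCondition.
type NodeCondition struct {
	Type, Status, Reason, Message         string
	LastHeartbeatTime, LastTransitionTime time.Time
}

// readyCondition starts from Ready=True and demotes it when the
// container runtime network is not ready.
func readyCondition(networkReady bool, networkErr string, now time.Time) NodeCondition {
	c := NodeCondition{Type: "Ready", Status: "True", Reason: "KubeletReady",
		LastHeartbeatTime: now, LastTransitionTime: now}
	if !networkReady {
		c.Status = "False"
		c.Reason = "KubeletNotReady"
		c.Message = "container runtime network not ready: " + networkErr
	}
	return c
}

func main() {
	c := readyCondition(false,
		"NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?",
		time.Now())
	fmt.Printf("%+v\n", c) // matches the condition={...} payload in the log
}
```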
event="NodeHasNoDiskPressure" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.344626 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.344642 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.344655 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:56Z","lastTransitionTime":"2026-01-21T10:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:56 crc kubenswrapper[4684]: E0121 10:06:56.357343 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74077632-3431-446e-a506-0b618e843835\\\",\\\"systemUUID\\\":\\\"667581df-3edd-489b-b118-69df188c96a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:56Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.361527 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.361562 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
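The NetworkReady message names the exact gate: no CNI network config was found in /etc/kubernetes/cni/net.d/. A hedged approximation of that lookup (the real check lives in the runtime's libcni/ocicni plumbing; extension filtering and validation are simplified here):

```go
// Approximate sketch of the "any CNI config present?" readiness gate.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // path taken from the log message
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("NetworkReady=false: %v\n", err)
		return
	}
	var confs []string
	for _, e := range entries {
		// libcni considers .conf, .conflist, and .json network configs.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		fmt.Println("NetworkReady=false: no CNI configuration file in", confDir)
		return
	}
	fmt.Println("CNI configs found:", confs)
}
```

On this node the directory is empty because the network operator's pods cannot start, which is itself a consequence of the expired certificates, so the condition persists.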
event="NodeHasNoDiskPressure" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.361573 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.361592 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.361605 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:56Z","lastTransitionTime":"2026-01-21T10:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:56 crc kubenswrapper[4684]: E0121 10:06:56.374711 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74077632-3431-446e-a506-0b618e843835\\\",\\\"systemUUID\\\":\\\"667581df-3edd-489b-b118-69df188c96a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:06:56Z is after 2025-08-24T17:21:41Z" Jan 21 10:06:56 crc kubenswrapper[4684]: E0121 10:06:56.374852 4684 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.376336 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
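The burst ends exactly as upstream kubelet dictates: the status update is capped at nodeStatusUpdateRetry (5) attempts, and the five consecutive webhook failures above exhaust it, yielding "update node status exceeds retry count". A minimal sketch of that control flow, with updateNodeStatus as a stand-in for the real status patch:

```go
// Minimal sketch of the bounded-retry pattern visible in this log.
package main

import (
	"errors"
	"fmt"
)

// nodeStatusUpdateRetry matches the upstream kubelet constant (5).
const nodeStatusUpdateRetry = 5

// updateNodeStatus stands in for the real PATCH of the Node object,
// which here always fails at the admission webhook.
func updateNodeStatus() error {
	return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)
}

func main() {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := updateNodeStatus(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return // success: stop retrying
	}
	fmt.Println("Unable to update node status: update node status exceeds retry count")
}
```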
event="NodeHasSufficientMemory" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.376387 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.376401 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.376421 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.376433 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:56Z","lastTransitionTime":"2026-01-21T10:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.478846 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.478907 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.478921 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.478942 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.478959 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:56Z","lastTransitionTime":"2026-01-21T10:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.502465 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 09:12:39.887928585 +0000 UTC Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.513998 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:06:56 crc kubenswrapper[4684]: E0121 10:06:56.514217 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.581177 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.581223 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.581234 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.581251 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.581261 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:56Z","lastTransitionTime":"2026-01-21T10:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.685249 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.685329 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.685344 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.685410 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.685427 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:56Z","lastTransitionTime":"2026-01-21T10:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.685427 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:56Z","lastTransitionTime":"2026-01-21T10:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.788668 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.788730 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.788742 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.788790 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.788802 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:56Z","lastTransitionTime":"2026-01-21T10:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.892429 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.892482 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.892497 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.892518 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.892535 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:56Z","lastTransitionTime":"2026-01-21T10:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.995416 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.995736 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.995834 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.995925 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:06:56 crc kubenswrapper[4684]: I0121 10:06:56.996009 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:56Z","lastTransitionTime":"2026-01-21T10:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.099080 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.099124 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.099134 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.099150 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.099162 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:57Z","lastTransitionTime":"2026-01-21T10:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.201652 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.201717 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.201737 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.201761 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.201771 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:57Z","lastTransitionTime":"2026-01-21T10:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.304140 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.304199 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.304215 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.304238 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.304250 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:57Z","lastTransitionTime":"2026-01-21T10:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.407745 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.407804 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.407817 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.407838 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.407849 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:57Z","lastTransitionTime":"2026-01-21T10:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.503683 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 16:51:44.070435228 +0000 UTC Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.511030 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.511098 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.511111 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.511131 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.511143 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:57Z","lastTransitionTime":"2026-01-21T10:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.513918 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:06:57 crc kubenswrapper[4684]: E0121 10:06:57.514069 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.513949 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:06:57 crc kubenswrapper[4684]: E0121 10:06:57.514160 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.513926 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:06:57 crc kubenswrapper[4684]: E0121 10:06:57.514223 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.614105 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.614145 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.614154 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.614172 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.614181 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:57Z","lastTransitionTime":"2026-01-21T10:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.716484 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.716563 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.716576 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.716596 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.716609 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:57Z","lastTransitionTime":"2026-01-21T10:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.819132 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.819186 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.819203 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.819228 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.819244 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:57Z","lastTransitionTime":"2026-01-21T10:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.923090 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.923182 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.923259 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.923452 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:57 crc kubenswrapper[4684]: I0121 10:06:57.923559 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:57Z","lastTransitionTime":"2026-01-21T10:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.026585 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.026719 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.026738 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.026765 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.026782 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:58Z","lastTransitionTime":"2026-01-21T10:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.130261 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.130317 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.130331 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.130347 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.130374 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:58Z","lastTransitionTime":"2026-01-21T10:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.233436 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.233482 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.233492 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.233509 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.233519 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:58Z","lastTransitionTime":"2026-01-21T10:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.336449 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.336496 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.336508 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.336527 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.336539 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:58Z","lastTransitionTime":"2026-01-21T10:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.439841 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.439898 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.439911 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.439931 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.439947 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:58Z","lastTransitionTime":"2026-01-21T10:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.504443 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 06:03:48.139796426 +0000 UTC Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.513892 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:06:58 crc kubenswrapper[4684]: E0121 10:06:58.514093 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.548820 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.548876 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.548890 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.548910 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.548922 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:58Z","lastTransitionTime":"2026-01-21T10:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.651778 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.651836 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.651846 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.651864 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.651875 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:58Z","lastTransitionTime":"2026-01-21T10:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.755451 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.755490 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.755501 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.755533 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.755549 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:58Z","lastTransitionTime":"2026-01-21T10:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.857643 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.857695 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.857729 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.857749 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.857761 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:58Z","lastTransitionTime":"2026-01-21T10:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.960924 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.961002 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.961028 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.961061 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:58 crc kubenswrapper[4684]: I0121 10:06:58.961083 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:58Z","lastTransitionTime":"2026-01-21T10:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.064804 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.064871 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.064895 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.064942 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.064971 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:59Z","lastTransitionTime":"2026-01-21T10:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.167781 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.167861 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.167899 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.167938 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.167965 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:59Z","lastTransitionTime":"2026-01-21T10:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.271392 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.271454 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.271480 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.271513 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.271535 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:59Z","lastTransitionTime":"2026-01-21T10:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.374858 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.374922 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.374939 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.374968 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.375025 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:59Z","lastTransitionTime":"2026-01-21T10:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.478066 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.478141 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.478196 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.478229 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.478252 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:59Z","lastTransitionTime":"2026-01-21T10:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.505574 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 09:11:06.468214956 +0000 UTC Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.513879 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.514009 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.514189 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:06:59 crc kubenswrapper[4684]: E0121 10:06:59.514170 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:06:59 crc kubenswrapper[4684]: E0121 10:06:59.514262 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:06:59 crc kubenswrapper[4684]: E0121 10:06:59.514391 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.581400 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.581469 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.581485 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.581505 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.581519 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:59Z","lastTransitionTime":"2026-01-21T10:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.683783 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.683824 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.683837 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.683854 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.683865 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:59Z","lastTransitionTime":"2026-01-21T10:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.786703 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.786750 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.786764 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.786788 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.786802 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:59Z","lastTransitionTime":"2026-01-21T10:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.888745 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.888791 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.888810 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.888826 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.888837 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:59Z","lastTransitionTime":"2026-01-21T10:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.991796 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.991838 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.991849 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.991867 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:06:59 crc kubenswrapper[4684]: I0121 10:06:59.991877 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:06:59Z","lastTransitionTime":"2026-01-21T10:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.094783 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.094862 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.094888 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.094924 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.094950 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:00Z","lastTransitionTime":"2026-01-21T10:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.200052 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.200125 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.200138 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.200160 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.200173 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:00Z","lastTransitionTime":"2026-01-21T10:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.303686 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.303750 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.303774 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.303809 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.303832 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:00Z","lastTransitionTime":"2026-01-21T10:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.406719 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.406781 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.406801 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.406828 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.406851 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:00Z","lastTransitionTime":"2026-01-21T10:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.505705 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 07:51:27.073324554 +0000 UTC
Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.510160 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.510193 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.510204 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.510239 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.510251 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:00Z","lastTransitionTime":"2026-01-21T10:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.513738 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7"
Jan 21 10:07:00 crc kubenswrapper[4684]: E0121 10:07:00.513952 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5"
Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.615202 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.615313 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.615433 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.615482 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
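[Editor's note: the certificate_manager.go entries keep printing the same expiration (2026-02-24 05:53:03 UTC) but a different "rotation deadline" on each pass (2025-12-22, 2025-12-29, 2025-12-06, 2026-01-15, ...). That pattern matches upstream client-go's certificate manager as I understand it (an assumption about this build, not something the log itself states): the deadline is re-drawn uniformly from the 70-90% span of the certificate's lifetime on every attempt, and since every drawn date already lies in the past on 2026-01-21, rotation is due immediately and the line repeats while the node stays NotReady. A minimal sketch of that computation, with a hypothetical notBefore since only the expiry appears in the log:

    import random
    from datetime import datetime, timedelta

    not_after = datetime(2026, 2, 24, 5, 53, 3)   # expiry printed in the log
    not_before = not_after - timedelta(days=365)  # hypothetical issue time, not in the log
    lifetime = not_after - not_before

    # Deadline drawn uniformly from [70%, 90%] of the certificate lifetime.
    deadline = not_before + lifetime * (0.7 + 0.2 * random.random())
    print("rotation deadline:", deadline)

Running it a few times reproduces the wandering dates seen above.]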
Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.615698 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:00Z","lastTransitionTime":"2026-01-21T10:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.719117 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.719237 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.719264 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.719294 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.719311 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:00Z","lastTransitionTime":"2026-01-21T10:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.822796 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.822895 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.822921 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.822959 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.822983 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:00Z","lastTransitionTime":"2026-01-21T10:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.926049 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.926172 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.926189 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.926217 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:00 crc kubenswrapper[4684]: I0121 10:07:00.926243 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:00Z","lastTransitionTime":"2026-01-21T10:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.028938 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.029031 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.029056 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.029092 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.029117 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:01Z","lastTransitionTime":"2026-01-21T10:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.132395 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.132493 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.132509 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.132572 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.132589 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:01Z","lastTransitionTime":"2026-01-21T10:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.235165 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.235210 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.235223 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.235242 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.235254 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:01Z","lastTransitionTime":"2026-01-21T10:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.338033 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.338084 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.338097 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.338131 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.338144 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:01Z","lastTransitionTime":"2026-01-21T10:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.441575 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.441660 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.441690 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.441710 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.441720 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:01Z","lastTransitionTime":"2026-01-21T10:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.506506 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 19:52:46.687872588 +0000 UTC Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.513819 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.513845 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:07:01 crc kubenswrapper[4684]: E0121 10:07:01.513985 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
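[annotation] The repeating five-event burst (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, then the "Node became not ready" condition) fires on every node-status sync, roughly every 100 ms here, because readiness is gated on the runtime's NetworkReady condition: with no CNI configuration file in /etc/kubernetes/cni/net.d/, the kubelet holds Ready=False and skips sandbox creation for pods that need pod networking, which is exactly what the "Error syncing pod, skipping" entries for network-metrics-daemon-7wzh7 and the network-check pods record. A hypothetical poll loop sketching that gate (not kubelet code; the .conf/.conflist/.json extensions are the ones libcni accepts):

    # Hypothetical sketch of the readiness gate implied by the log above:
    # the node stays NotReady until a CNI config file appears in the
    # configuration directory named in the error message.
    import glob
    import os
    import time

    CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"  # directory named in the log

    def network_ready() -> bool:
        # libcni accepts .conf, .conflist and .json files; an empty or
        # missing directory is the "no CNI configuration file" condition.
        patterns = ("*.conf", "*.conflist", "*.json")
        return any(glob.glob(os.path.join(CNI_CONF_DIR, p)) for p in patterns)

    while not network_ready():
        print("NetworkReady=false: no CNI configuration file in", CNI_CONF_DIR)
        time.sleep(5)
    print("CNI config present; node can report Ready")

Once the network operator writes a config into that directory, both the NotReady loop and the sandbox skips should clear on the next sync.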
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:07:01 crc kubenswrapper[4684]: E0121 10:07:01.514067 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.513866 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:07:01 crc kubenswrapper[4684]: E0121 10:07:01.514147 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.544494 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.544555 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.544569 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.544589 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.544606 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:01Z","lastTransitionTime":"2026-01-21T10:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.647794 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.647857 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.647874 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.647901 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.647914 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:01Z","lastTransitionTime":"2026-01-21T10:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.751560 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.751622 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.751636 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.751687 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.751704 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:01Z","lastTransitionTime":"2026-01-21T10:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.855222 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.855263 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.855277 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.855294 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.855305 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:01Z","lastTransitionTime":"2026-01-21T10:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.957339 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.957406 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.957446 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.957470 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:01 crc kubenswrapper[4684]: I0121 10:07:01.957485 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:01Z","lastTransitionTime":"2026-01-21T10:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.059930 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.059991 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.060008 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.060030 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.060046 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:02Z","lastTransitionTime":"2026-01-21T10:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.162477 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.162520 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.162532 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.162549 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.162562 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:02Z","lastTransitionTime":"2026-01-21T10:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.265072 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.265128 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.265144 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.265164 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.265181 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:02Z","lastTransitionTime":"2026-01-21T10:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.368186 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.368249 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.368263 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.368285 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.368300 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:02Z","lastTransitionTime":"2026-01-21T10:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.471020 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.471456 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.471480 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.471503 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.471518 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:02Z","lastTransitionTime":"2026-01-21T10:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.506829 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 09:33:43.509080053 +0000 UTC Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.514299 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:07:02 crc kubenswrapper[4684]: E0121 10:07:02.514513 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.537590 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925b275b3ce3428bc68dd39075a8e86e83b35703ea89a529c6ce6bbf53fa215a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:02Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.553963 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status 
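[annotation] The status_manager failures that begin here are a separate problem from the missing CNI config: every pod status patch is rejected because the apiserver must call the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, and that endpoint presents a certificate that expired on 2025-08-24 while the node clock reads 2026-01-21. The same verification failure can be reproduced with a plain TLS handshake; a diagnostic sketch using Python's standard ssl module (host and port taken from the log, CA bundle path environment-specific and left as an assumption):

    # Reproduce the x509 verification failure against the webhook endpoint.
    # Host and port come from the log; this is a diagnostic sketch, not a fix.
    import socket
    import ssl

    HOST, PORT = "127.0.0.1", 9743

    ctx = ssl.create_default_context()
    ctx.check_hostname = False  # only the validity dates matter here
    # ctx.load_verify_locations(cafile="<cluster CA bundle>")  # path varies

    try:
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST):
                print("handshake succeeded; certificate verified")
    except ssl.SSLCertVerificationError as err:
        # With the signing CA loaded this should print 'certificate has
        # expired', matching the kubelet's error; without it, an issuer
        # error may be reported first.
        print("verification failed:", err.verify_message)

Until that webhook serves a currently valid certificate (or the node clock is corrected), every patch below fails the same way regardless of pod health.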
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:02Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.571353 4684 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feda3c91-51cd-4b8e-becd-177b669f0ee1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc4a441c86c981477aad2074fb6af273a45679d04b8f017405e4259fa3af0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbfdabb57b220a10323f2265b70a9583abee7feee55914699d619d556f9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xzdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:02Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.575292 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.575338 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.575380 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.575406 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.575424 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:02Z","lastTransitionTime":"2026-01-21T10:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.587880 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7wzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49971ee3-e56a-4d50-8fc5-231bdcfc92d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58x9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58x9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7wzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:02Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.608451 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45cf8c338a42eaff90f551f49c7b98fe724ec8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:02Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.624207 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a07df987-cbd3-446c-92d6-b32c0d9b73b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8af2a9e2c9cccbb9146355c3d2bbadbb7e809cadbe9f03418a572949b5fef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf5cff7de59dbee28051caf06c20848d8410ecd30957032e73fb50e08205a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754bf369301df94a8be477f1c7520ddd02a55a6ff977a9a56f2d6a5798e114da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8347766168f6c478e865958122b7baaf91d9e888070df679c7710b2aa4092599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8347766168f6c478e865958122b7baaf91d9e888070df679c7710b2aa4092599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:02Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.641560 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:02Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.663061 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a866
27cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:02Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.677838 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e8c501924c32a0dc19e2006504a2724f48712feacd1e94c143c4013887ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:02Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.678455 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.678486 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.678496 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.678511 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.678521 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:02Z","lastTransitionTime":"2026-01-21T10:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.692970 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:02Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.709065 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c90d2149e2b9ca12f6ef71ae05e440a733043e4c993e48adc0651a2dc7dd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:02Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.723265 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:02Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.736009 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:02Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.748956 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:02Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.760511 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:02Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.781479 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.781538 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.781552 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.781573 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.781592 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:02Z","lastTransitionTime":"2026-01-21T10:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.782079 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://764fb7faccdef6be93c776669dd962f59f34450446b728987af9b182aba23daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://764fb7faccdef6be93c776669dd962f59f34450446b728987af9b182aba23daf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:06:51Z\\\",\\\"message\\\":\\\"70 6294 services_controller.go:444] Built service openshift-marketplace/community-operators LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0121 10:06:51.634485 6294 services_controller.go:443] Built service openshift-kube-apiserver-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.109\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0121 10:06:51.634499 6294 services_controller.go:444] Built service openshift-kube-apiserver-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0121 10:06:51.633708 6294 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0121 10:06:51.634508 6294 services_controller.go:445] Built service openshift-kube-apiserver-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0121 10:06:51.634522 6294 services_controller.go:451] Built service openshift-kube-apiserver-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
E\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6vjwl_openshift-ovn-kubernetes(8dac888b-051f-405a-8c23-60c205d2aecc)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:02Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.833912 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:02Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.860741 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:02Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.885569 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.885607 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.885617 4684 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.885633 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.885645 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:02Z","lastTransitionTime":"2026-01-21T10:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.988985 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.989032 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.989043 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.989061 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:02 crc kubenswrapper[4684]: I0121 10:07:02.989071 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:02Z","lastTransitionTime":"2026-01-21T10:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.091647 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.091685 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.091695 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.091715 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.091723 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:03Z","lastTransitionTime":"2026-01-21T10:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.194617 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.194666 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.194676 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.194694 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.194705 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:03Z","lastTransitionTime":"2026-01-21T10:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.297903 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.297948 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.297964 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.297990 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.298006 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:03Z","lastTransitionTime":"2026-01-21T10:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.400343 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.400451 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.400474 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.400503 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.400526 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:03Z","lastTransitionTime":"2026-01-21T10:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.503459 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.503518 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.503538 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.503564 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.503581 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:03Z","lastTransitionTime":"2026-01-21T10:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.507873 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 10:05:02.703388631 +0000 UTC Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.514207 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.514271 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.514207 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:07:03 crc kubenswrapper[4684]: E0121 10:07:03.514393 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:07:03 crc kubenswrapper[4684]: E0121 10:07:03.514490 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:07:03 crc kubenswrapper[4684]: E0121 10:07:03.514579 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.606956 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.607029 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.607050 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.607080 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.607100 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:03Z","lastTransitionTime":"2026-01-21T10:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.710625 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.710690 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.710708 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.710733 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.710748 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:03Z","lastTransitionTime":"2026-01-21T10:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.814235 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.814347 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.814405 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.814440 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.814462 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:03Z","lastTransitionTime":"2026-01-21T10:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.918026 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.918093 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.918116 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.918144 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:03 crc kubenswrapper[4684]: I0121 10:07:03.918159 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:03Z","lastTransitionTime":"2026-01-21T10:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.021198 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.021255 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.021270 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.021290 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.021301 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:04Z","lastTransitionTime":"2026-01-21T10:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.125115 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.125244 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.125265 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.125292 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.125308 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:04Z","lastTransitionTime":"2026-01-21T10:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.228780 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.228939 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.228979 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.229071 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.229146 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:04Z","lastTransitionTime":"2026-01-21T10:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.332420 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.332467 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.332481 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.332501 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.332514 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:04Z","lastTransitionTime":"2026-01-21T10:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.436142 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.436219 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.436243 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.436274 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.436299 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:04Z","lastTransitionTime":"2026-01-21T10:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.509018 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 01:45:33.6161677 +0000 UTC Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.514680 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:07:04 crc kubenswrapper[4684]: E0121 10:07:04.514878 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.538888 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.538937 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.538952 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.538972 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.538986 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:04Z","lastTransitionTime":"2026-01-21T10:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.641782 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.641835 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.641850 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.641870 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.641887 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:04Z","lastTransitionTime":"2026-01-21T10:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.745673 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.745735 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.745751 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.745776 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.745792 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:04Z","lastTransitionTime":"2026-01-21T10:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.849746 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.849818 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.849837 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.849865 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.849884 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:04Z","lastTransitionTime":"2026-01-21T10:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.953280 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.953334 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.953353 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.953402 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:04 crc kubenswrapper[4684]: I0121 10:07:04.953421 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:04Z","lastTransitionTime":"2026-01-21T10:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.056215 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.056268 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.056286 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.056312 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.056329 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:05Z","lastTransitionTime":"2026-01-21T10:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.159437 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.159539 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.159559 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.159584 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.159598 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:05Z","lastTransitionTime":"2026-01-21T10:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.262836 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.262893 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.262906 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.262926 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.262939 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:05Z","lastTransitionTime":"2026-01-21T10:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.366915 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.367026 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.367049 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.367200 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.367229 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:05Z","lastTransitionTime":"2026-01-21T10:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.470900 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.470952 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.470964 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.470987 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.470999 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:05Z","lastTransitionTime":"2026-01-21T10:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.510131 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 16:59:34.051926318 +0000 UTC Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.513445 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:07:05 crc kubenswrapper[4684]: E0121 10:07:05.513658 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.513722 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.513782 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:07:05 crc kubenswrapper[4684]: E0121 10:07:05.513889 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:07:05 crc kubenswrapper[4684]: E0121 10:07:05.514085 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.574391 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.574452 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.574477 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.574511 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.574536 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:05Z","lastTransitionTime":"2026-01-21T10:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.677823 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.677886 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.677904 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.677934 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.677952 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:05Z","lastTransitionTime":"2026-01-21T10:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.781398 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.781470 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.781484 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.781537 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.781553 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:05Z","lastTransitionTime":"2026-01-21T10:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.885021 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.885077 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.885091 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.885115 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.885127 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:05Z","lastTransitionTime":"2026-01-21T10:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.988160 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.988211 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.988224 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.988241 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:05 crc kubenswrapper[4684]: I0121 10:07:05.988256 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:05Z","lastTransitionTime":"2026-01-21T10:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.090736 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.091080 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.091246 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.091407 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.091523 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:06Z","lastTransitionTime":"2026-01-21T10:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.194529 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.194754 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.194858 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.194962 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.195074 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:06Z","lastTransitionTime":"2026-01-21T10:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.297908 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.297964 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.297979 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.298003 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.298017 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:06Z","lastTransitionTime":"2026-01-21T10:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.401314 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.401635 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.401736 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.401831 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.401895 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:06Z","lastTransitionTime":"2026-01-21T10:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.481204 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.481244 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.481256 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.481273 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.481284 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:06Z","lastTransitionTime":"2026-01-21T10:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:06 crc kubenswrapper[4684]: E0121 10:07:06.500270 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74077632-3431-446e-a506-0b618e843835\\\",\\\"systemUUID\\\":\\\"667581df-3edd-489b-b118-69df188c96a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:06Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.505097 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.505127 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.505139 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.505156 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.505167 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:06Z","lastTransitionTime":"2026-01-21T10:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.510978 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 23:54:48.604303409 +0000 UTC Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.513516 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:07:06 crc kubenswrapper[4684]: E0121 10:07:06.513674 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:07:06 crc kubenswrapper[4684]: E0121 10:07:06.523701 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeByt
es\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74077632-3431-446e-a506-0b618e843835\\\",\\\"systemUUID\\\":\\\"667581df-3edd-489b-b118-69df188c96a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:06Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.528595 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.528632 4684 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.528643 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.528659 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.528671 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:06Z","lastTransitionTime":"2026-01-21T10:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:06 crc kubenswrapper[4684]: E0121 10:07:06.545891 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74077632-3431-446e-a506-0b618e843835\\\",\\\"systemUUID\\\":\\\"667581df-3edd-489b-b118-69df188c96a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:06Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.550841 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.551099 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.551198 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.551307 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.551435 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:06Z","lastTransitionTime":"2026-01-21T10:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:06 crc kubenswrapper[4684]: E0121 10:07:06.566743 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74077632-3431-446e-a506-0b618e843835\\\",\\\"systemUUID\\\":\\\"667581df-3edd-489b-b118-69df188c96a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:06Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.570903 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.571058 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.571181 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.571278 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.571697 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:06Z","lastTransitionTime":"2026-01-21T10:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:06 crc kubenswrapper[4684]: E0121 10:07:06.594218 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74077632-3431-446e-a506-0b618e843835\\\",\\\"systemUUID\\\":\\\"667581df-3edd-489b-b118-69df188c96a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:06Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:06 crc kubenswrapper[4684]: E0121 10:07:06.594470 4684 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.596616 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.596650 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.596661 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.596679 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.596690 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:06Z","lastTransitionTime":"2026-01-21T10:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.700058 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.700098 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.700109 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.700132 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.700148 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:06Z","lastTransitionTime":"2026-01-21T10:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.803114 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.803154 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.803169 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.803187 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:06 crc kubenswrapper[4684]: I0121 10:07:06.803199 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:06Z","lastTransitionTime":"2026-01-21T10:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 10:07:07 crc kubenswrapper[4684]: I0121 10:07:07.512329 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 20:46:09.186575924 +0000 UTC
Jan 21 10:07:07 crc kubenswrapper[4684]: I0121 10:07:07.513548 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 10:07:07 crc kubenswrapper[4684]: E0121 10:07:07.513690 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 10:07:07 crc kubenswrapper[4684]: I0121 10:07:07.513772 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 10:07:07 crc kubenswrapper[4684]: I0121 10:07:07.513790 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 10:07:07 crc kubenswrapper[4684]: E0121 10:07:07.514184 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 10:07:07 crc kubenswrapper[4684]: E0121 10:07:07.514305 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:07:07 crc kubenswrapper[4684]: I0121 10:07:07.514557 4684 scope.go:117] "RemoveContainer" containerID="764fb7faccdef6be93c776669dd962f59f34450446b728987af9b182aba23daf" Jan 21 10:07:07 crc kubenswrapper[4684]: E0121 10:07:07.514831 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6vjwl_openshift-ovn-kubernetes(8dac888b-051f-405a-8c23-60c205d2aecc)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" Jan 21 10:07:07 crc kubenswrapper[4684]: I0121 10:07:07.523723 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:07 crc kubenswrapper[4684]: I0121 10:07:07.523790 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:07 crc kubenswrapper[4684]: I0121 10:07:07.523815 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:07 crc kubenswrapper[4684]: I0121 10:07:07.523848 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:07 crc kubenswrapper[4684]: I0121 10:07:07.523871 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:07Z","lastTransitionTime":"2026-01-21T10:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:07 crc kubenswrapper[4684]: I0121 10:07:07.627040 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:07 crc kubenswrapper[4684]: I0121 10:07:07.627083 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:07 crc kubenswrapper[4684]: I0121 10:07:07.627093 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:07 crc kubenswrapper[4684]: I0121 10:07:07.627127 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:07 crc kubenswrapper[4684]: I0121 10:07:07.627140 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:07Z","lastTransitionTime":"2026-01-21T10:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 10:07:08 crc kubenswrapper[4684]: I0121 10:07:08.503312 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49971ee3-e56a-4d50-8fc5-231bdcfc92d5-metrics-certs\") pod \"network-metrics-daemon-7wzh7\" (UID: \"49971ee3-e56a-4d50-8fc5-231bdcfc92d5\") " pod="openshift-multus/network-metrics-daemon-7wzh7"
Jan 21 10:07:08 crc kubenswrapper[4684]: E0121 10:07:08.503592 4684 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 10:07:08 crc kubenswrapper[4684]: E0121 10:07:08.503761 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49971ee3-e56a-4d50-8fc5-231bdcfc92d5-metrics-certs podName:49971ee3-e56a-4d50-8fc5-231bdcfc92d5 nodeName:}" failed. No retries permitted until 2026-01-21 10:07:40.503727749 +0000 UTC m=+98.261810756 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/49971ee3-e56a-4d50-8fc5-231bdcfc92d5-metrics-certs") pod "network-metrics-daemon-7wzh7" (UID: "49971ee3-e56a-4d50-8fc5-231bdcfc92d5") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 10:07:08 crc kubenswrapper[4684]: I0121 10:07:08.513461 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 08:59:52.911006281 +0000 UTC
Jan 21 10:07:08 crc kubenswrapper[4684]: I0121 10:07:08.514110 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7"
Jan 21 10:07:08 crc kubenswrapper[4684]: E0121 10:07:08.514225 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5"
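The 'object "openshift-multus"/"metrics-daemon-secret" not registered' failure above means the kubelet's local secret manager has not yet registered that secret for the pod (typical this early after a restart, before the pod has fully synced), so the mount fails on the node even if the object exists in the API server, and the operation is retried with back-off (here 32s). A hypothetical client-go check from the API side; the kubeconfig path is an assumption, while the namespace and secret name come from the log:

```go
// Hypothetical API-side check for the failing mount above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Println("build client:", err)
		return
	}
	s, err := cs.CoreV1().Secrets("openshift-multus").Get(context.Background(), "metrics-daemon-secret", metav1.GetOptions{})
	if err != nil {
		// NotFound here would implicate the secret itself; the kubelet's
		// "not registered" error instead points at its local secret manager.
		fmt.Println("secret lookup failed:", err)
		return
	}
	fmt.Printf("secret exists with %d keys; the kubelet error is local registration, not absence\n", len(s.Data))
}
```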
Jan 21 10:07:09 crc kubenswrapper[4684]: I0121 10:07:09.513477 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 10:07:09 crc kubenswrapper[4684]: I0121 10:07:09.513520 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 10:07:09 crc kubenswrapper[4684]: E0121 10:07:09.513583 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:07:09 crc kubenswrapper[4684]: I0121 10:07:09.513648 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 01:29:25.900641778 +0000 UTC Jan 21 10:07:09 crc kubenswrapper[4684]: E0121 10:07:09.513693 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:07:09 crc kubenswrapper[4684]: I0121 10:07:09.513991 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:07:09 crc kubenswrapper[4684]: E0121 10:07:09.514073 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:07:09 crc kubenswrapper[4684]: I0121 10:07:09.591021 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:09 crc kubenswrapper[4684]: I0121 10:07:09.591058 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:09 crc kubenswrapper[4684]: I0121 10:07:09.591070 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:09 crc kubenswrapper[4684]: I0121 10:07:09.591091 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:09 crc kubenswrapper[4684]: I0121 10:07:09.591103 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:09Z","lastTransitionTime":"2026-01-21T10:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 10:07:10 crc kubenswrapper[4684]: I0121 10:07:10.513901 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 00:42:23.286095854 +0000 UTC
Jan 21 10:07:10 crc kubenswrapper[4684]: I0121 10:07:10.514170 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7"
Jan 21 10:07:10 crc kubenswrapper[4684]: E0121 10:07:10.514321 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5"
pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:07:10 crc kubenswrapper[4684]: I0121 10:07:10.516398 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:10 crc kubenswrapper[4684]: I0121 10:07:10.516430 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:10 crc kubenswrapper[4684]: I0121 10:07:10.516443 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:10 crc kubenswrapper[4684]: I0121 10:07:10.516459 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:10 crc kubenswrapper[4684]: I0121 10:07:10.516468 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:10Z","lastTransitionTime":"2026-01-21T10:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:10 crc kubenswrapper[4684]: I0121 10:07:10.619514 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:10 crc kubenswrapper[4684]: I0121 10:07:10.619558 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:10 crc kubenswrapper[4684]: I0121 10:07:10.619569 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:10 crc kubenswrapper[4684]: I0121 10:07:10.619589 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:10 crc kubenswrapper[4684]: I0121 10:07:10.619604 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:10Z","lastTransitionTime":"2026-01-21T10:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.339311 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.339381 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.339396 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.339417 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.339430 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:11Z","lastTransitionTime":"2026-01-21T10:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.441884 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.441938 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.441949 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.441969 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.441983 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:11Z","lastTransitionTime":"2026-01-21T10:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.513990 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.514030 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.514030 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.514078 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 07:47:59.822423745 +0000 UTC Jan 21 10:07:11 crc kubenswrapper[4684]: E0121 10:07:11.514149 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:07:11 crc kubenswrapper[4684]: E0121 10:07:11.514416 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:07:11 crc kubenswrapper[4684]: E0121 10:07:11.514458 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.545307 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.545403 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.545428 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.545456 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.545481 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:11Z","lastTransitionTime":"2026-01-21T10:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.648449 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.648485 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.648495 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.648513 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.648528 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:11Z","lastTransitionTime":"2026-01-21T10:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.752180 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.752256 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.752274 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.752821 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.752894 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:11Z","lastTransitionTime":"2026-01-21T10:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.856514 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.856565 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.856578 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.856597 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.856611 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:11Z","lastTransitionTime":"2026-01-21T10:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.960169 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.960225 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.960239 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.960264 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.960278 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:11Z","lastTransitionTime":"2026-01-21T10:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.990840 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6jwd4_1e7ac4c6-b960-418c-b057-e55d95a213cd/kube-multus/0.log" Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.990901 4684 generic.go:334] "Generic (PLEG): container finished" podID="1e7ac4c6-b960-418c-b057-e55d95a213cd" containerID="a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857" exitCode=1 Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.990940 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6jwd4" event={"ID":"1e7ac4c6-b960-418c-b057-e55d95a213cd","Type":"ContainerDied","Data":"a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857"} Jan 21 10:07:11 crc kubenswrapper[4684]: I0121 10:07:11.991488 4684 scope.go:117] "RemoveContainer" containerID="a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.017318 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://764fb7faccdef6be93c776669dd962f59f344504
46b728987af9b182aba23daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://764fb7faccdef6be93c776669dd962f59f34450446b728987af9b182aba23daf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:06:51Z\\\",\\\"message\\\":\\\"70 6294 services_controller.go:444] Built service openshift-marketplace/community-operators LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0121 10:06:51.634485 6294 services_controller.go:443] Built service openshift-kube-apiserver-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.109\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0121 10:06:51.634499 6294 services_controller.go:444] Built service openshift-kube-apiserver-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0121 10:06:51.633708 6294 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0121 10:06:51.634508 6294 services_controller.go:445] Built service openshift-kube-apiserver-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0121 10:06:51.634522 6294 services_controller.go:451] Built service openshift-kube-apiserver-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", E\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6vjwl_openshift-ovn-kubernetes(8dac888b-051f-405a-8c23-60c205d2aecc)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.034238 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.050584 4684 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.065777 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.066205 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.066235 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.066247 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.066267 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.066279 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:12Z","lastTransitionTime":"2026-01-21T10:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.080727 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.093672 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.106573 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7wzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49971ee3-e56a-4d50-8fc5-231bdcfc92d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58x9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58x9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7wzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.122393 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45cf8c338a42eaff90f551f49c7b98fe724ec8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.135380 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a07df987-cbd3-446c-92d6-b32c0d9b73b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8af2a9e2c9cccbb9146355c3d2bbadbb7e809cadbe9f03418a572949b5fef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf5cff7de59dbee28051caf06c20848d8410ecd30957032e73fb50e08205a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754bf369301df94a8be477f1c7520ddd02a55a6ff977a9a56f2d6a5798e114da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8347766168f6c478e865958122b7baaf91d9e888070df679c7710b2aa4092599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8347766168f6c478e865958122b7baaf91d9e888070df679c7710b2aa4092599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.153617 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.166404 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925b275b3ce3428bc68dd39075a8e86e83b35703ea89a529c6ce6bbf53fa215a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.169520 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.169548 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.169558 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.169576 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.169590 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:12Z","lastTransitionTime":"2026-01-21T10:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.179919 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.194802 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feda3c91-51cd-4b8e-becd-177b669f0ee1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc4a441c86c981477aad2074fb6af273a45679d04b8f017405e4259fa3af0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbfdabb57b220a10323f2265b70a9583abee7feee55914699d619d556f9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:
35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xzdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.218848 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a86627cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.231867 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e8c501924c32a0dc19e2006504a2724f48712feacd1e94c143c4013887ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.264748 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.275519 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.275576 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.275587 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.275613 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.275624 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:12Z","lastTransitionTime":"2026-01-21T10:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.280449 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c90d2149e2b9ca12f6ef71ae05e440a733043e4c993e48adc0651a2dc7dd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.296947 4684 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-6jwd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:07:11Z\\\",\\\"message\\\":\\\"2026-01-21T10:06:25+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_44d36849-1a74-413c-a296-be2e8a3b5f1c\\\\n2026-01-21T10:06:25+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_44d36849-1a74-413c-a296-be2e8a3b5f1c to /host/opt/cni/bin/\\\\n2026-01-21T10:06:26Z [verbose] multus-daemon started\\\\n2026-01-21T10:06:26Z [verbose] Readiness Indicator file check\\\\n2026-01-21T10:07:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.378556 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.378586 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.378598 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.378640 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.378652 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:12Z","lastTransitionTime":"2026-01-21T10:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.481561 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.481596 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.481611 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.481629 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.481641 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:12Z","lastTransitionTime":"2026-01-21T10:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.514205 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:18:14.811177328 +0000 UTC Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.514390 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:07:12 crc kubenswrapper[4684]: E0121 10:07:12.514515 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.530495 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.544383 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c90d2149e2b9ca12f6ef71ae05e440a733043e4c993e48adc0651a2dc7dd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.556089 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:07:11Z\\\",\\\"message\\\":\\\"2026-01-21T10:06:25+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_44d36849-1a74-413c-a296-be2e8a3b5f1c\\\\n2026-01-21T10:06:25+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_44d36849-1a74-413c-a296-be2e8a3b5f1c to /host/opt/cni/bin/\\\\n2026-01-21T10:06:26Z [verbose] multus-daemon started\\\\n2026-01-21T10:06:26Z [verbose] Readiness Indicator file check\\\\n2026-01-21T10:07:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.569095 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.582638 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.583933 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.583965 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.583976 4684 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.583992 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.584002 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:12Z","lastTransitionTime":"2026-01-21T10:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.596465 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.613350 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.624576 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.645981 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://764fb7faccdef6be93c776669dd962f59f34450446b728987af9b182aba23daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://764fb7faccdef6be93c776669dd962f59f34450446b728987af9b182aba23daf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:06:51Z\\\",\\\"message\\\":\\\"70 6294 services_controller.go:444] Built service openshift-marketplace/community-operators LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0121 10:06:51.634485 6294 services_controller.go:443] Built service openshift-kube-apiserver-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.109\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0121 10:06:51.634499 6294 services_controller.go:444] Built service openshift-kube-apiserver-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0121 10:06:51.633708 6294 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0121 10:06:51.634508 6294 services_controller.go:445] Built service openshift-kube-apiserver-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0121 10:06:51.634522 6294 services_controller.go:451] Built service openshift-kube-apiserver-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
E\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6vjwl_openshift-ovn-kubernetes(8dac888b-051f-405a-8c23-60c205d2aecc)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.659217 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45cf8c338a42eaff90f551f49c7b98fe724ec8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.669680 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a07df987-cbd3-446c-92d6-b32c0d9b73b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8af2a9e2c9cccbb9146355c3d2bbadbb7e809cadbe9f03418a572949b5fef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf5cff7de59dbee28051caf06c20848d8410ecd30957032e73fb50e08205a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754bf369301df94a8be477f1c7520ddd02a55a6ff977a9a56f2d6a5798e114da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8347766168f6c478e865958122b7baaf91d9e888070df679c7710b2aa4092599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8347766168f6c478e865958122b7baaf91d9e888070df679c7710b2aa4092599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.683037 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.686351 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.686394 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.686404 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.686422 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.686435 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:12Z","lastTransitionTime":"2026-01-21T10:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.695545 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925b275b3ce3428bc68dd39075a8e86e83b35703ea89a529c6ce6bbf53fa215a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.708178 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.720269 4684 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feda3c91-51cd-4b8e-becd-177b669f0ee1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc4a441c86c981477aad2074fb6af273a45679d04b8f017405e4259fa3af0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbfdabb57b220a10323f2265b70a9583abee7feee55914699d619d556f9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xzdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.730945 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7wzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49971ee3-e56a-4d50-8fc5-231bdcfc92d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58x9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58x9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7wzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.755112 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed5
50a11d345a86627cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6
489b1831c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.769549 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e8c501924c32a0dc19e2006504a2724f48712feacd1e94c143c4013887ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:12Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.790063 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.790096 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.790106 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.790124 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.790133 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:12Z","lastTransitionTime":"2026-01-21T10:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.892300 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.892343 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.892353 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.892385 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.892397 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:12Z","lastTransitionTime":"2026-01-21T10:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.995379 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.995432 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.995446 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.995466 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.995482 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:12Z","lastTransitionTime":"2026-01-21T10:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.998476 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6jwd4_1e7ac4c6-b960-418c-b057-e55d95a213cd/kube-multus/0.log" Jan 21 10:07:12 crc kubenswrapper[4684]: I0121 10:07:12.998553 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6jwd4" event={"ID":"1e7ac4c6-b960-418c-b057-e55d95a213cd","Type":"ContainerStarted","Data":"ecdbfb9dca303cd8a667876237175bf17e36baff120b184253fc01c630c37b8d"} Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.024220 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a866
27cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.036485 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e8c501924c32a0dc19e2006504a2724f48712feacd1e94c143c4013887ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.053390 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.068807 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c90d2149e2b9ca12f6ef71ae05e440a733043e4c993e48adc0651a2dc7dd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.082033 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecdbfb9dca303cd8a667876237175bf17e36baff120b184253fc01c630c37b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:07:11Z\\\",\\\"message\\\":\\\"2026-01-21T10:06:25+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_44d36849-1a74-413c-a296-be2e8a3b5f1c\\\\n2026-01-21T10:06:25+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_44d36849-1a74-413c-a296-be2e8a3b5f1c to /host/opt/cni/bin/\\\\n2026-01-21T10:06:26Z [verbose] multus-daemon started\\\\n2026-01-21T10:06:26Z [verbose] Readiness Indicator file check\\\\n2026-01-21T10:07:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.097088 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.098325 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.098382 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.098399 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.098419 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.098432 4684 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:13Z","lastTransitionTime":"2026-01-21T10:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.110046 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.122417 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.135068 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.145340 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.170475 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://764fb7faccdef6be93c776669dd962f59f344504
46b728987af9b182aba23daf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://764fb7faccdef6be93c776669dd962f59f34450446b728987af9b182aba23daf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:06:51Z\\\",\\\"message\\\":\\\"70 6294 services_controller.go:444] Built service openshift-marketplace/community-operators LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0121 10:06:51.634485 6294 services_controller.go:443] Built service openshift-kube-apiserver-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.109\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0121 10:06:51.634499 6294 services_controller.go:444] Built service openshift-kube-apiserver-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0121 10:06:51.633708 6294 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0121 10:06:51.634508 6294 services_controller.go:445] Built service openshift-kube-apiserver-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0121 10:06:51.634522 6294 services_controller.go:451] Built service openshift-kube-apiserver-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", E\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6vjwl_openshift-ovn-kubernetes(8dac888b-051f-405a-8c23-60c205d2aecc)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.184549 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45cf8c338a42eaff90f551f49c7b98fe724ec8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.198703 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a07df987-cbd3-446c-92d6-b32c0d9b73b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8af2a9e2c9cccbb9146355c3d2bbadbb7e809cadbe9f03418a572949b5fef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf5cff7de59dbee28051caf06c20848d8410ecd30957032e73fb50e08205a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754bf369301df94a8be477f1c7520ddd02a55a6ff977a9a56f2d6a5798e114da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8347766168f6c478e865958122b7baaf91d9e888070df679c7710b2aa4092599\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8347766168f6c478e865958122b7baaf91d9e888070df679c7710b2aa4092599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.200984 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.201010 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.201021 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.201038 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.201049 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:13Z","lastTransitionTime":"2026-01-21T10:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.214010 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.232538 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925b275b3ce3428bc68dd39075a8e86e83b35703ea89a529c6ce6bbf53fa215a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.246413 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.261334 4684 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feda3c91-51cd-4b8e-becd-177b669f0ee1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc4a441c86c981477aad2074fb6af273a45679d04b8f017405e4259fa3af0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbfdabb57b220a10323f2265b70a9583abee7feee55914699d619d556f9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xzdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.274793 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7wzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49971ee3-e56a-4d50-8fc5-231bdcfc92d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58x9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58x9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7wzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.303779 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.303825 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.303837 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.303857 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.303870 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:13Z","lastTransitionTime":"2026-01-21T10:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.406820 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.406869 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.406880 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.406917 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.406930 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:13Z","lastTransitionTime":"2026-01-21T10:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.508853 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.508892 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.508903 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.508921 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.508934 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:13Z","lastTransitionTime":"2026-01-21T10:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.514418 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.514398 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 05:53:55.94653232 +0000 UTC Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.514472 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.514418 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:07:13 crc kubenswrapper[4684]: E0121 10:07:13.514535 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:07:13 crc kubenswrapper[4684]: E0121 10:07:13.514671 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:07:13 crc kubenswrapper[4684]: E0121 10:07:13.514740 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.611536 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.611613 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.611627 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.611649 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.611663 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:13Z","lastTransitionTime":"2026-01-21T10:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.714397 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.714456 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.714475 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.714499 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.714515 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:13Z","lastTransitionTime":"2026-01-21T10:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.816694 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.816744 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.816757 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.816774 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.816785 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:13Z","lastTransitionTime":"2026-01-21T10:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.919201 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.919248 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.919259 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.919278 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:13 crc kubenswrapper[4684]: I0121 10:07:13.919289 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:13Z","lastTransitionTime":"2026-01-21T10:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.021849 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.021902 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.021913 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.021932 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.021942 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:14Z","lastTransitionTime":"2026-01-21T10:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.124455 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.124529 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.124548 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.124576 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.124593 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:14Z","lastTransitionTime":"2026-01-21T10:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.227182 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.227221 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.227234 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.227253 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.227266 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:14Z","lastTransitionTime":"2026-01-21T10:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.330598 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.330646 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.330657 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.330674 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.330684 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:14Z","lastTransitionTime":"2026-01-21T10:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.432948 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.432999 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.433011 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.433030 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.433042 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:14Z","lastTransitionTime":"2026-01-21T10:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.514564 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.514560 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 10:07:41.860258136 +0000 UTC Jan 21 10:07:14 crc kubenswrapper[4684]: E0121 10:07:14.514771 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.535929 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.535989 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.536008 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.536035 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.536053 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:14Z","lastTransitionTime":"2026-01-21T10:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.638784 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.638829 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.638841 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.638861 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.638878 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:14Z","lastTransitionTime":"2026-01-21T10:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.741740 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.741789 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.741801 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.741824 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.741837 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:14Z","lastTransitionTime":"2026-01-21T10:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.844030 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.844060 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.844069 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.844084 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.844095 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:14Z","lastTransitionTime":"2026-01-21T10:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.946343 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.946390 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.946399 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.946414 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:14 crc kubenswrapper[4684]: I0121 10:07:14.946423 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:14Z","lastTransitionTime":"2026-01-21T10:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.049116 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.049148 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.049161 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.049179 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.049190 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:15Z","lastTransitionTime":"2026-01-21T10:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.152268 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.152312 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.152325 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.152344 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.152374 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:15Z","lastTransitionTime":"2026-01-21T10:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.255448 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.255486 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.255497 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.255515 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.255528 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:15Z","lastTransitionTime":"2026-01-21T10:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.357605 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.357678 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.357697 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.357727 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.357752 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:15Z","lastTransitionTime":"2026-01-21T10:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.460422 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.460469 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.460483 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.460504 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.460517 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:15Z","lastTransitionTime":"2026-01-21T10:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.513543 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.513650 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:07:15 crc kubenswrapper[4684]: E0121 10:07:15.513703 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:07:15 crc kubenswrapper[4684]: E0121 10:07:15.513818 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.513665 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:07:15 crc kubenswrapper[4684]: E0121 10:07:15.513958 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.515598 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 10:48:29.351919332 +0000 UTC Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.563045 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.563105 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.563132 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.563164 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.563186 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:15Z","lastTransitionTime":"2026-01-21T10:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.665507 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.665555 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.665567 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.665602 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.665613 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:15Z","lastTransitionTime":"2026-01-21T10:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.768797 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.768843 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.768860 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.768878 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.768889 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:15Z","lastTransitionTime":"2026-01-21T10:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.871313 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.871382 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.871396 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.871419 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.871431 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:15Z","lastTransitionTime":"2026-01-21T10:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.975932 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.975990 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.976003 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.976023 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:15 crc kubenswrapper[4684]: I0121 10:07:15.976035 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:15Z","lastTransitionTime":"2026-01-21T10:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.079062 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.079095 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.079105 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.079122 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.079132 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:16Z","lastTransitionTime":"2026-01-21T10:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.182232 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.182277 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.182289 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.182306 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.182317 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:16Z","lastTransitionTime":"2026-01-21T10:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.285910 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.285961 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.285979 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.286001 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.286013 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:16Z","lastTransitionTime":"2026-01-21T10:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.395088 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.395123 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.395134 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.395154 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.395164 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:16Z","lastTransitionTime":"2026-01-21T10:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.498628 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.498675 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.498685 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.498702 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.498713 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:16Z","lastTransitionTime":"2026-01-21T10:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.514574 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:07:16 crc kubenswrapper[4684]: E0121 10:07:16.514834 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.516561 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 17:23:24.513208699 +0000 UTC Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.602179 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.602223 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.602235 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.602251 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.602265 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:16Z","lastTransitionTime":"2026-01-21T10:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.613547 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.613591 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.613605 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.613624 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.613636 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:16Z","lastTransitionTime":"2026-01-21T10:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:16 crc kubenswrapper[4684]: E0121 10:07:16.628718 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74077632-3431-446e-a506-0b618e843835\\\",\\\"systemUUID\\\":\\\"667581df-3edd-489b-b118-69df188c96a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.634449 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.634537 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.634562 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.634597 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.634621 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:16Z","lastTransitionTime":"2026-01-21T10:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:16 crc kubenswrapper[4684]: E0121 10:07:16.656213 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74077632-3431-446e-a506-0b618e843835\\\",\\\"systemUUID\\\":\\\"667581df-3edd-489b-b118-69df188c96a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.662073 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.662141 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.662159 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.662183 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.662200 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:16Z","lastTransitionTime":"2026-01-21T10:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:16 crc kubenswrapper[4684]: E0121 10:07:16.683190 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74077632-3431-446e-a506-0b618e843835\\\",\\\"systemUUID\\\":\\\"667581df-3edd-489b-b118-69df188c96a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.688849 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.688923 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.688949 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.688986 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.689009 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:16Z","lastTransitionTime":"2026-01-21T10:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:16 crc kubenswrapper[4684]: E0121 10:07:16.709331 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74077632-3431-446e-a506-0b618e843835\\\",\\\"systemUUID\\\":\\\"667581df-3edd-489b-b118-69df188c96a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.714951 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.715009 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.715020 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.715038 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.715050 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:16Z","lastTransitionTime":"2026-01-21T10:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:16 crc kubenswrapper[4684]: E0121 10:07:16.729785 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74077632-3431-446e-a506-0b618e843835\\\",\\\"systemUUID\\\":\\\"667581df-3edd-489b-b118-69df188c96a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:16 crc kubenswrapper[4684]: E0121 10:07:16.729908 4684 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.731860 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.731909 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.731924 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.731950 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.731966 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:16Z","lastTransitionTime":"2026-01-21T10:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.834195 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.834261 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.834275 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.834293 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.834307 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:16Z","lastTransitionTime":"2026-01-21T10:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.937152 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.937214 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.937224 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.937244 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:16 crc kubenswrapper[4684]: I0121 10:07:16.937258 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:16Z","lastTransitionTime":"2026-01-21T10:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.040580 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.040626 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.040643 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.040663 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.040679 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:17Z","lastTransitionTime":"2026-01-21T10:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.143985 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.144048 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.144064 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.144087 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.144106 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:17Z","lastTransitionTime":"2026-01-21T10:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.247029 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.247102 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.247126 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.247156 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.247177 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:17Z","lastTransitionTime":"2026-01-21T10:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.350943 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.351027 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.351046 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.351070 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.351084 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:17Z","lastTransitionTime":"2026-01-21T10:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.454653 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.454731 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.454757 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.454795 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.454822 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:17Z","lastTransitionTime":"2026-01-21T10:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.514524 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.514616 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:07:17 crc kubenswrapper[4684]: E0121 10:07:17.514702 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.514710 4684 util.go:30] "No sandbox for pod can be found. 
Jan 21 10:07:17 crc kubenswrapper[4684]: E0121 10:07:17.514831 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 10:07:17 crc kubenswrapper[4684]: E0121 10:07:17.515061 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.516976 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 07:49:31.66922734 +0000 UTC
Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.557255 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.557305 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.557317 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.557342 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.557382 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:17Z","lastTransitionTime":"2026-01-21T10:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.661087 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.661130 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.661143 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.661161 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.661174 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:17Z","lastTransitionTime":"2026-01-21T10:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.763944 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.763995 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.764007 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.764027 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.764043 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:17Z","lastTransitionTime":"2026-01-21T10:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.867992 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.868065 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.868084 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.868111 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.868128 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:17Z","lastTransitionTime":"2026-01-21T10:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.971493 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.971574 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.971609 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.971633 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:17 crc kubenswrapper[4684]: I0121 10:07:17.971658 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:17Z","lastTransitionTime":"2026-01-21T10:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.074818 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.074867 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.074879 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.074901 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.074917 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:18Z","lastTransitionTime":"2026-01-21T10:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.178451 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.178524 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.178544 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.178573 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.178593 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:18Z","lastTransitionTime":"2026-01-21T10:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.282153 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.282223 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.282242 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.282269 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.282288 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:18Z","lastTransitionTime":"2026-01-21T10:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.385918 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.385974 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.385985 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.386007 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.386023 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:18Z","lastTransitionTime":"2026-01-21T10:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.489345 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.489432 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.489449 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.489471 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.489488 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:18Z","lastTransitionTime":"2026-01-21T10:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.514561 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:07:18 crc kubenswrapper[4684]: E0121 10:07:18.514773 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.517196 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 23:24:42.520831918 +0000 UTC Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.592600 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.592656 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.592668 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.592689 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.592702 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:18Z","lastTransitionTime":"2026-01-21T10:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.696322 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.696445 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.696456 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.696475 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.696488 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:18Z","lastTransitionTime":"2026-01-21T10:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.799425 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.799480 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.799490 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.799508 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.799518 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:18Z","lastTransitionTime":"2026-01-21T10:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.927872 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.927953 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.927972 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.927998 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:18 crc kubenswrapper[4684]: I0121 10:07:18.928021 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:18Z","lastTransitionTime":"2026-01-21T10:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.031846 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.031950 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.032017 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.032069 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.032098 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:19Z","lastTransitionTime":"2026-01-21T10:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.135651 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.135720 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.135739 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.135770 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.135793 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:19Z","lastTransitionTime":"2026-01-21T10:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.239226 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.239291 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.239314 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.239348 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.239433 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:19Z","lastTransitionTime":"2026-01-21T10:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.342736 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.342798 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.342817 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.342843 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.342863 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:19Z","lastTransitionTime":"2026-01-21T10:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.446482 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.446572 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.446592 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.446622 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.446638 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:19Z","lastTransitionTime":"2026-01-21T10:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.514126 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.514267 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:07:19 crc kubenswrapper[4684]: E0121 10:07:19.514354 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:07:19 crc kubenswrapper[4684]: E0121 10:07:19.514480 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.514486 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:07:19 crc kubenswrapper[4684]: E0121 10:07:19.515169 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.515830 4684 scope.go:117] "RemoveContainer" containerID="764fb7faccdef6be93c776669dd962f59f34450446b728987af9b182aba23daf" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.518293 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 13:39:23.328259102 +0000 UTC Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.550002 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.550056 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.550073 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.550099 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.550117 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:19Z","lastTransitionTime":"2026-01-21T10:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.653415 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.653473 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.653483 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.653502 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.653519 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:19Z","lastTransitionTime":"2026-01-21T10:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.756795 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.756841 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.756857 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.756879 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.756893 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:19Z","lastTransitionTime":"2026-01-21T10:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.859777 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.859828 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.859841 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.859862 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.859875 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:19Z","lastTransitionTime":"2026-01-21T10:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.963343 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.963621 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.963644 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.963673 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:19 crc kubenswrapper[4684]: I0121 10:07:19.963692 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:19Z","lastTransitionTime":"2026-01-21T10:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.024201 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6vjwl_8dac888b-051f-405a-8c23-60c205d2aecc/ovnkube-controller/2.log" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.027850 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" event={"ID":"8dac888b-051f-405a-8c23-60c205d2aecc","Type":"ContainerStarted","Data":"4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6"} Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.029083 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.047886 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.063079 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c90d2149e2b9ca12f6ef71ae05e440a733043e4c993e48adc0651a2dc7dd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.066738 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.066783 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:20 crc 
Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.066829 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.066848 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:20Z","lastTransitionTime":"2026-01-21T10:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.084092 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecdbfb9dca303cd8a667876237175bf17e36baff120b184253fc01c630c37b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:07:11Z\\\",\\\"message\\\":\\\"2026-01-21T10:06:25+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_44d36849-1a74-413c-a296-be2e8a3b5f1c\\\\n2026-01-21T10:06:25+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_44d36849-1a74-413c-a296-be2e8a3b5f1c to /host/opt/cni/bin/\\\\n2026-01-21T10:06:26Z [verbose] multus-daemon started\\\\n2026-01-21T10:06:26Z [verbose] Readiness Indicator file check\\\\n2026-01-21T10:07:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:20Z is after 2025-08-24T17:21:41Z"
Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.108176 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://764fb7faccdef6be93c776669dd962f59f34450446b728987af9b182aba23daf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:06:51Z\\\",\\\"message\\\":\\\"70 6294 services_controller.go:444] Built service openshift-marketplace/community-operators LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0121 10:06:51.634485 6294 services_controller.go:443] Built service openshift-kube-apiserver-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.109\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0121 10:06:51.634499 6294 services_controller.go:444] Built service openshift-kube-apiserver-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0121 10:06:51.633708 6294 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0121 10:06:51.634508 6294 services_controller.go:445] Built service openshift-kube-apiserver-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0121 10:06:51.634522 6294 services_controller.go:451] Built service openshift-kube-apiserver-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
E\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.123960 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.137646 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.151140 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.163483 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.169830 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.169868 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.169881 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.169903 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.169916 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:20Z","lastTransitionTime":"2026-01-21T10:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.174787 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.189049 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7wzh7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49971ee3-e56a-4d50-8fc5-231bdcfc92d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58x9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58x9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7wzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.204476 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45cf8c338a42eaff90f551f49c7b98fe724ec8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.223139 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a07df987-cbd3-446c-92d6-b32c0d9b73b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8af2a9e2c9cccbb9146355c3d2bbadbb7e809cadbe9f03418a572949b5fef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf5cff7de59dbee28051caf06c20848d8410ecd30957032e73fb50e08205a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754bf369301df94a8be477f1c7520ddd02a55a6ff977a9a56f2d6a5798e114da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8347766168f6c478e865958122b7baaf91d9e888070df679c7710b2aa4092599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8347766168f6c478e865958122b7baaf91d9e888070df679c7710b2aa4092599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.246379 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.259565 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925b275b3ce3428bc68dd39075a8e86e83b35703ea89a529c6ce6bbf53fa215a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.269885 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.274238 4684 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.274313 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.274330 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.274350 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.274433 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:20Z","lastTransitionTime":"2026-01-21T10:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.285633 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feda3c91-51cd-4b8e-becd-177b669f0ee1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc4a441c86c981477aad2074fb6af273a45679d04b8f017405e4259fa3af0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbfdabb57b220a10323f2265b70a9583abee7feee55914699d619d556f9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xzdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.306975 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67
314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a86627cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.318082 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e8c501924c32a0dc19e2006504a2724f48712feacd1e94c143c4013887ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.377907 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.377949 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.377959 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.377977 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.377987 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:20Z","lastTransitionTime":"2026-01-21T10:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.480351 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.480415 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.480427 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.480450 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.480463 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:20Z","lastTransitionTime":"2026-01-21T10:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.514181 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:07:20 crc kubenswrapper[4684]: E0121 10:07:20.514385 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.518885 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 00:14:58.010519638 +0000 UTC Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.583109 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.583147 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.583159 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.583177 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.583188 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:20Z","lastTransitionTime":"2026-01-21T10:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.685826 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.685884 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.685901 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.685925 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.685948 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:20Z","lastTransitionTime":"2026-01-21T10:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.788150 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.788522 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.788537 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.788557 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.788570 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:20Z","lastTransitionTime":"2026-01-21T10:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.891690 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.891753 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.891767 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.891787 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.891799 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:20Z","lastTransitionTime":"2026-01-21T10:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.994002 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.994055 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.994073 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.994098 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:20 crc kubenswrapper[4684]: I0121 10:07:20.994114 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:20Z","lastTransitionTime":"2026-01-21T10:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.096632 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.096680 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.096695 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.096721 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.096739 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:21Z","lastTransitionTime":"2026-01-21T10:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.199792 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.199832 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.199846 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.199863 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.199876 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:21Z","lastTransitionTime":"2026-01-21T10:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.302740 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.302810 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.302834 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.302869 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.302893 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:21Z","lastTransitionTime":"2026-01-21T10:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.405643 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.405703 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.405727 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.405754 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.405767 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:21Z","lastTransitionTime":"2026-01-21T10:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.509750 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.509816 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.509892 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.509960 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.509983 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:21Z","lastTransitionTime":"2026-01-21T10:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.513897 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.513941 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.513942 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:07:21 crc kubenswrapper[4684]: E0121 10:07:21.514075 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:07:21 crc kubenswrapper[4684]: E0121 10:07:21.514162 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:07:21 crc kubenswrapper[4684]: E0121 10:07:21.514265 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.520000 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 09:37:17.462863504 +0000 UTC Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.613138 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.613236 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.613262 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.613288 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.613303 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:21Z","lastTransitionTime":"2026-01-21T10:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.716132 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.716203 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.716227 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.716262 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.716287 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:21Z","lastTransitionTime":"2026-01-21T10:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.819640 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.819683 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.819697 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.819717 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.819732 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:21Z","lastTransitionTime":"2026-01-21T10:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.922895 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.922960 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.922978 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.923005 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:21 crc kubenswrapper[4684]: I0121 10:07:21.923027 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:21Z","lastTransitionTime":"2026-01-21T10:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.026714 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.026784 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.026862 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.026928 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.026953 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:22Z","lastTransitionTime":"2026-01-21T10:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.038046 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6vjwl_8dac888b-051f-405a-8c23-60c205d2aecc/ovnkube-controller/3.log" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.038829 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6vjwl_8dac888b-051f-405a-8c23-60c205d2aecc/ovnkube-controller/2.log" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.043074 4684 generic.go:334] "Generic (PLEG): container finished" podID="8dac888b-051f-405a-8c23-60c205d2aecc" containerID="4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6" exitCode=1 Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.043165 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" event={"ID":"8dac888b-051f-405a-8c23-60c205d2aecc","Type":"ContainerDied","Data":"4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6"} Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.043256 4684 scope.go:117] "RemoveContainer" containerID="764fb7faccdef6be93c776669dd962f59f34450446b728987af9b182aba23daf" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.044524 4684 scope.go:117] "RemoveContainer" containerID="4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6" Jan 21 10:07:22 crc kubenswrapper[4684]: E0121 10:07:22.044820 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6vjwl_openshift-ovn-kubernetes(8dac888b-051f-405a-8c23-60c205d2aecc)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.065990 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.086661 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c90d2149e2b9ca12f6ef71ae05e440a733043e4c993e48adc0651a2dc7dd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.109961 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecdbfb9dca303cd8a667876237175bf17e36baff120b184253fc01c630c37b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:07:11Z\\\",\\\"message\\\":\\\"2026-01-21T10:06:25+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_44d36849-1a74-413c-a296-be2e8a3b5f1c\\\\n2026-01-21T10:06:25+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_44d36849-1a74-413c-a296-be2e8a3b5f1c to /host/opt/cni/bin/\\\\n2026-01-21T10:06:26Z [verbose] multus-daemon started\\\\n2026-01-21T10:06:26Z [verbose] Readiness Indicator file check\\\\n2026-01-21T10:07:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.130524 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.130633 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.130671 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.130691 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.130705 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:22Z","lastTransitionTime":"2026-01-21T10:07:22Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.134395 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.149707 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.166150 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.182917 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.196605 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.234421 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9fb827b6c473a55160608247e657c1b08b6367
8d4386e004f6d8b2dccfe6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://764fb7faccdef6be93c776669dd962f59f34450446b728987af9b182aba23daf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:06:51Z\\\",\\\"message\\\":\\\"70 6294 services_controller.go:444] Built service openshift-marketplace/community-operators LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0121 10:06:51.634485 6294 services_controller.go:443] Built service openshift-kube-apiserver-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.109\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0121 10:06:51.634499 6294 services_controller.go:444] Built service openshift-kube-apiserver-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0121 10:06:51.633708 6294 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0121 10:06:51.634508 6294 services_controller.go:445] Built service openshift-kube-apiserver-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0121 10:06:51.634522 6294 services_controller.go:451] Built service openshift-kube-apiserver-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", E\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:07:21Z\\\",\\\"message\\\":\\\"303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-sff6s\\\\nI0121 10:07:20.957652 6679 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-sff6s\\\\nI0121 10:07:20.957659 6679 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-sff6s in node crc\\\\nI0121 10:07:20.957664 6679 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-sff6s after 0 failed attempt(s)\\\\nI0121 10:07:20.957669 6679 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-sff6s\\\\nI0121 10:07:20.957687 6679 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-gshl8\\\\nF0121 10:07:20.957691 6679 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 
0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: In\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatu
ses\\\":[{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.237178 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.237244 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.237268 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.237309 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.237330 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:22Z","lastTransitionTime":"2026-01-21T10:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.256661 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45cf8c338a42eaff90f551f49c7b98fe724ec8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.274964 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a07df987-cbd3-446c-92d6-b32c0d9b73b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8af2a9e2c9cccbb9146355c3d2bbadbb7e809cadbe9f03418a572949b5fef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf5cff7de59dbee28051caf06c20848d8410ecd30957032e73fb50e08205a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754bf369301df94a8be477f1c7520ddd02a55a6ff977a9a56f2d6a5798e114da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8347766168f6c478e865958122b7baaf91d9e888070df679c7710b2aa4092599\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8347766168f6c478e865958122b7baaf91d9e888070df679c7710b2aa4092599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.291682 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.307292 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925b275b3ce3428bc68dd39075a8e86e83b35703ea89a529c6ce6bbf53fa215a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.323309 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.340081 4684 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feda3c91-51cd-4b8e-becd-177b669f0ee1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc4a441c86c981477aad2074fb6af273a45679d04b8f017405e4259fa3af0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbfdabb57b220a10323f2265b70a9583abee7feee55914699d619d556f9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xzdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.340138 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.340394 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.340415 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.340445 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.340465 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:22Z","lastTransitionTime":"2026-01-21T10:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.358039 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7wzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49971ee3-e56a-4d50-8fc5-231bdcfc92d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58x9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58x9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7wzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.392154 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a866
27cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.407615 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e8c501924c32a0dc19e2006504a2724f48712feacd1e94c143c4013887ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.443946 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.444007 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.444020 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.444042 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.444056 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:22Z","lastTransitionTime":"2026-01-21T10:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.514748 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:07:22 crc kubenswrapper[4684]: E0121 10:07:22.514947 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.520138 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 21:46:24.884542656 +0000 UTC Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.534824 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://790657fdb00f7ee639f03d75ec1a227905b6f20fcf1616600d3c94fd4707ea83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3141adce7086aa8fd254d9cdcee933ed07d4f7c7debf319f2dff56b06a5b71ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.547288 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.547332 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.547346 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.547386 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.547401 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:22Z","lastTransitionTime":"2026-01-21T10:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.555088 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.573453 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.595121 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xpk8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af519c74-92e1-4b1e-84a9-148aa5d0aa2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54b430f4d2f23e4d8a99bd5c5df97a15eebfad3f287f23e3c174fa19e80c559b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b95j9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xpk8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.626975 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dac888b-051f-405a-8c23-60c205d2aecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://764fb7faccdef6be93c776669dd962f59f34450446b728987af9b182aba23daf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:06:51Z\\\",\\\"message\\\":\\\"70 6294 services_controller.go:444] Built service openshift-marketplace/community-operators LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0121 10:06:51.634485 6294 services_controller.go:443] Built service openshift-kube-apiserver-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.109\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0121 10:06:51.634499 6294 services_controller.go:444] Built service openshift-kube-apiserver-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0121 10:06:51.633708 6294 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0121 10:06:51.634508 6294 services_controller.go:445] Built service openshift-kube-apiserver-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nI0121 10:06:51.634522 6294 services_controller.go:451] Built service openshift-kube-apiserver-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
E\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:07:21Z\\\",\\\"message\\\":\\\"303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-sff6s\\\\nI0121 10:07:20.957652 6679 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-sff6s\\\\nI0121 10:07:20.957659 6679 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-sff6s in node crc\\\\nI0121 10:07:20.957664 6679 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-sff6s after 0 failed attempt(s)\\\\nI0121 10:07:20.957669 6679 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-sff6s\\\\nI0121 10:07:20.957687 6679 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-gshl8\\\\nF0121 10:07:20.957691 6679 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: 
In\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:07:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h68cm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6vjwl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.649609 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.649665 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.649680 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.649703 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.649717 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:22Z","lastTransitionTime":"2026-01-21T10:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.650420 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d8e76ee-8f45-4723-92aa-8b4c9e20dd38\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:06:16.529914 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:06:16.536673 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1741332029/tls.crt::/tmp/serving-cert-1741332029/tls.key\\\\\\\"\\\\nI0121 10:06:22.014413 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:06:22.023098 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:06:22.023124 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:06:22.023144 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:06:22.023162 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:06:22.053748 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:06:22.053856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053884 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:06:22.053912 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:06:22.053933 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:06:22.053953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:06:22.053974 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:06:22.056482 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:06:22.056738 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.668666 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.683519 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://925b275b3ce3428bc68dd39075a8e86e83b35703ea89a529c6ce6bbf53fa215a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.696382 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55d2c484-cf10-46b5-913f-2a033a2ff5c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a504cdf3dfd79c48b74d53b7fae8cb4ee07edf5b95c3aab58c3351727f8754d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-db9hs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sff6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.709937 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"feda3c91-51cd-4b8e-becd-177b669f0ee1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc4a441c86c981477aad2074fb6af273a45679d04b8f017405e4259fa3af0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cfbfdabb57b220a10323f2265b70a9583abee7feee55914699d619d556f9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v75dd\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xzdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.720712 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7wzh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49971ee3-e56a-4d50-8fc5-231bdcfc92d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58x9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58x9s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7wzh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.752264 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"954ca378-3bb0-41f8-9add-3a52a8cf8954\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://763603568bb7e4c89e98dc691c95ca54257c4615d3caf1550f5dc399b7be26af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://25f36a1b795e2add3059ad348f6893f52cf6f9d53cbe73b31669f732e0b46a81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f208b4ff9c2d3e45cf8c338a42eaff90f551f49c7b98fe724ec8b8157e8b4a56\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.752596 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.752616 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.752629 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.752649 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.752661 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:22Z","lastTransitionTime":"2026-01-21T10:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.776776 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a07df987-cbd3-446c-92d6-b32c0d9b73b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8af2a9e2c9cccbb9146355c3d2bbadbb7e809cadbe9f03418a572949b5fef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://daf5cff7de59dbee28051caf06c20848d8410ecd30957032e73fb50e08205a34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://754bf369301df94a8be477f1c7520ddd02a55a6ff977a9a56f2d6a5798e114da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8347766168f6c478e865958122b7baaf91d9e888070df679c7710b2aa4092599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8347766168f6c478e865958122b7baaf91d9e888070df679c7710b2aa4092599\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.808593 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c758a234-1518-4e19-b3b7-b65d69fd1bb9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d23ebc5c246ca35fc5d31295420828d8ba66a26b39b5b6fe3522c04bc8f8b10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0db
921078457fc1125a8beab2da96329dc3891ed976051233d76b707760c311b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21d567946cde7f3243b4d83910a350c3cfc97e1eb9d51cbc3274e61dba56eae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1124cb6c548323b2dc7e293ed550a11d345a86627cfa7221d08b7b934ef5fd0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c6f15ca4b02c1a6a0fbf5d36f1326260d448bf14de116d49f805a648e14b7b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8ee98c07f2a07ce43ffe8fda546cb95b14baea1cebd7fdef4acd37d20f906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69048bfcb9fe6842edd30a0c1e4d6c1d6c16c208ca3f6ca10f485ed528c05767\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:04Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b08efe5408cd9b27d966547c68859c26c206ce01927cc7609ca3e6489b1831c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:02Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.821460 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hktl5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6d6e52c-91fd-469d-af23-45c3833eb9d7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://637e8c501924c32a0dc19e2006504a2724f48712feacd1e94c143c4013887ac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dp8lj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hktl5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.839849 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6jwd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7ac4c6-b960-418c-b057-e55d95a213cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecdbfb9dca303cd8a667876237175bf17e36baff120b184253fc01c630c37b8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:07:11Z\\\",\\\"message\\\":\\\"2026-01-21T10:06:25+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_44d36849-1a74-413c-a296-be2e8a3b5f1c\\\\n2026-01-21T10:06:25+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_44d36849-1a74-413c-a296-be2e8a3b5f1c to /host/opt/cni/bin/\\\\n2026-01-21T10:06:26Z [verbose] multus-daemon started\\\\n2026-01-21T10:06:26Z [verbose] Readiness Indicator file check\\\\n2026-01-21T10:07:11Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t888c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6jwd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.855255 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.855410 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.855468 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.855542 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.855579 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:22Z","lastTransitionTime":"2026-01-21T10:07:22Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.858061 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:23Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1e09ab9aea33faca258ca0c0c40740ffbdadb8ef03f6b9c254e1472cc047ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.877574 4684 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gshl8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2704b9e-474a-466a-b78c-d136a2f95a3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:06:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35c90d2149e2b9ca12f6ef71ae05e440a733043e4c993e48adc0651a2dc7dd56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c23de6c3d9576a8945290bf6a61e0c0eb3fb4ea92c15572bbc84606e7489d9f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceb93acc9ea6e741d80aeef968d1863585f4339478a1bb585458cb9e36013b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://218c67cbf4b34699aacf6dd517cc7021615b1528ff04c4f1f70e8fe092a3487f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3451bd9d91062cecf2237fdf3e155c3806b0a46ac143415866c8b73506d6fa28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://473c931e25be6a7e693885f8ec5a13fa0b607faf275fcbdfff2f571649b06b7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51684150914b9009333f0426101d64362d9470454436eed500bebde715463c3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:06:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsmbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:06:23Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gshl8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.957855 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.957901 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:22 crc 
kubenswrapper[4684]: I0121 10:07:22.957913 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.957936 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:22 crc kubenswrapper[4684]: I0121 10:07:22.957948 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:22Z","lastTransitionTime":"2026-01-21T10:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.050593 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6vjwl_8dac888b-051f-405a-8c23-60c205d2aecc/ovnkube-controller/3.log" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.060755 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.060800 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.060811 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.060826 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.060837 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:23Z","lastTransitionTime":"2026-01-21T10:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.163457 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.163506 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.163517 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.163536 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.163551 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:23Z","lastTransitionTime":"2026-01-21T10:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.266960 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.267033 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.267053 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.267083 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.267103 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:23Z","lastTransitionTime":"2026-01-21T10:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.373660 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.373739 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.373751 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.373772 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.373786 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:23Z","lastTransitionTime":"2026-01-21T10:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.476939 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.476984 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.476994 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.477012 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.477023 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:23Z","lastTransitionTime":"2026-01-21T10:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.514314 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:07:23 crc kubenswrapper[4684]: E0121 10:07:23.514507 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.514596 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:07:23 crc kubenswrapper[4684]: E0121 10:07:23.514658 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.514674 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:07:23 crc kubenswrapper[4684]: E0121 10:07:23.514993 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.521263 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 22:24:47.870669968 +0000 UTC Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.579743 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.579813 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.579829 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.579855 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.579868 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:23Z","lastTransitionTime":"2026-01-21T10:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.683165 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.683207 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.683222 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.683239 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.683250 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:23Z","lastTransitionTime":"2026-01-21T10:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.785924 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.785969 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.785980 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.786000 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.786014 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:23Z","lastTransitionTime":"2026-01-21T10:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.889735 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.889802 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.889818 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.889875 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.889893 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:23Z","lastTransitionTime":"2026-01-21T10:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.992904 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.992958 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.992971 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.992989 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:23 crc kubenswrapper[4684]: I0121 10:07:23.992999 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:23Z","lastTransitionTime":"2026-01-21T10:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.096189 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.096250 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.096268 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.096291 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.096310 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:24Z","lastTransitionTime":"2026-01-21T10:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.199452 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.199509 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.199520 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.199537 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.199547 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:24Z","lastTransitionTime":"2026-01-21T10:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.302169 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.302243 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.302265 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.302296 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.302321 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:24Z","lastTransitionTime":"2026-01-21T10:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.405326 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.405401 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.405416 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.405439 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.405452 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:24Z","lastTransitionTime":"2026-01-21T10:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.508451 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.508496 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.508509 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.508530 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.508543 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:24Z","lastTransitionTime":"2026-01-21T10:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.514067 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:07:24 crc kubenswrapper[4684]: E0121 10:07:24.514296 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.521977 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 14:00:02.237681081 +0000 UTC Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.611596 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.611669 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.611688 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.611718 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.611743 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:24Z","lastTransitionTime":"2026-01-21T10:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.714945 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.715003 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.715021 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.715048 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.715064 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:24Z","lastTransitionTime":"2026-01-21T10:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.818202 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.818248 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.818284 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.818303 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.818314 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:24Z","lastTransitionTime":"2026-01-21T10:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.921254 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.921308 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.921322 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.921339 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:24 crc kubenswrapper[4684]: I0121 10:07:24.921349 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:24Z","lastTransitionTime":"2026-01-21T10:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.023672 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.023752 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.023767 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.023790 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.023802 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:25Z","lastTransitionTime":"2026-01-21T10:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.127001 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.127052 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.127069 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.127098 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.127138 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:25Z","lastTransitionTime":"2026-01-21T10:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.230100 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.230180 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.230244 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.230275 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.230295 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:25Z","lastTransitionTime":"2026-01-21T10:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.333226 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.333307 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.333333 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.333408 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.333449 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:25Z","lastTransitionTime":"2026-01-21T10:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.436056 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.436113 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.436133 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.436156 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.436168 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:25Z","lastTransitionTime":"2026-01-21T10:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.513897 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.513897 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:07:25 crc kubenswrapper[4684]: E0121 10:07:25.514150 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.513932 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:07:25 crc kubenswrapper[4684]: E0121 10:07:25.514483 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:07:25 crc kubenswrapper[4684]: E0121 10:07:25.514517 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.522218 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 05:56:52.12761205 +0000 UTC Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.539585 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.539648 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.539666 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.539692 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.539709 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:25Z","lastTransitionTime":"2026-01-21T10:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.642962 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.643036 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.643058 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.643090 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.643109 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:25Z","lastTransitionTime":"2026-01-21T10:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.747042 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.747117 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.747136 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.747167 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.747191 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:25Z","lastTransitionTime":"2026-01-21T10:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.824340 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.824513 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.824560 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:07:25 crc kubenswrapper[4684]: E0121 10:07:25.824612 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:29.824564764 +0000 UTC m=+147.582647771 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:07:25 crc kubenswrapper[4684]: E0121 10:07:25.824684 4684 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:07:25 crc kubenswrapper[4684]: E0121 10:07:25.824745 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:08:29.82472746 +0000 UTC m=+147.582810427 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.824771 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:07:25 crc kubenswrapper[4684]: E0121 10:07:25.824913 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:07:25 crc kubenswrapper[4684]: E0121 10:07:25.824956 4684 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:07:25 crc kubenswrapper[4684]: E0121 10:07:25.824970 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:07:25 crc kubenswrapper[4684]: E0121 10:07:25.824998 4684 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:07:25 crc kubenswrapper[4684]: E0121 10:07:25.825040 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:08:29.82502084 +0000 UTC m=+147.583103847 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:07:25 crc kubenswrapper[4684]: E0121 10:07:25.825101 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 10:08:29.825070852 +0000 UTC m=+147.583153859 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.825486 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:07:25 crc kubenswrapper[4684]: E0121 10:07:25.825676 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:07:25 crc kubenswrapper[4684]: E0121 10:07:25.825720 4684 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:07:25 crc kubenswrapper[4684]: E0121 10:07:25.825744 4684 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:07:25 crc kubenswrapper[4684]: E0121 10:07:25.825821 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 10:08:29.825796177 +0000 UTC m=+147.583879184 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.850177 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.850243 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.850267 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.850301 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.850325 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:25Z","lastTransitionTime":"2026-01-21T10:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.953727 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.953792 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.953807 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.953827 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:25 crc kubenswrapper[4684]: I0121 10:07:25.953841 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:25Z","lastTransitionTime":"2026-01-21T10:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:26 crc kubenswrapper[4684]: I0121 10:07:26.056320 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:26 crc kubenswrapper[4684]: I0121 10:07:26.056382 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:26 crc kubenswrapper[4684]: I0121 10:07:26.056396 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:26 crc kubenswrapper[4684]: I0121 10:07:26.056416 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:26 crc kubenswrapper[4684]: I0121 10:07:26.056430 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:26Z","lastTransitionTime":"2026-01-21T10:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:26 crc kubenswrapper[4684]: I0121 10:07:26.159057 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:26 crc kubenswrapper[4684]: I0121 10:07:26.159115 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:26 crc kubenswrapper[4684]: I0121 10:07:26.159133 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:26 crc kubenswrapper[4684]: I0121 10:07:26.159161 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:26 crc kubenswrapper[4684]: I0121 10:07:26.159181 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:26Z","lastTransitionTime":"2026-01-21T10:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:26 crc kubenswrapper[4684]: I0121 10:07:26.261525 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:26 crc kubenswrapper[4684]: I0121 10:07:26.261572 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:26 crc kubenswrapper[4684]: I0121 10:07:26.261589 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:26 crc kubenswrapper[4684]: I0121 10:07:26.261611 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:26 crc kubenswrapper[4684]: I0121 10:07:26.261626 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:26Z","lastTransitionTime":"2026-01-21T10:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:26 crc kubenswrapper[4684]: I0121 10:07:26.365439 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:26 crc kubenswrapper[4684]: I0121 10:07:26.365523 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:26 crc kubenswrapper[4684]: I0121 10:07:26.365549 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:26 crc kubenswrapper[4684]: I0121 10:07:26.365584 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:26 crc kubenswrapper[4684]: I0121 10:07:26.365613 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:26Z","lastTransitionTime":"2026-01-21T10:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:26 crc kubenswrapper[4684]: I0121 10:07:26.468397 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:26 crc kubenswrapper[4684]: I0121 10:07:26.468457 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:26 crc kubenswrapper[4684]: I0121 10:07:26.468477 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:26 crc kubenswrapper[4684]: I0121 10:07:26.468499 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:26 crc kubenswrapper[4684]: I0121 10:07:26.468513 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:26Z","lastTransitionTime":"2026-01-21T10:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:26 crc kubenswrapper[4684]: I0121 10:07:26.513578 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:07:26 crc kubenswrapper[4684]: E0121 10:07:26.513763 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:07:26 crc kubenswrapper[4684]: I0121 10:07:26.522553 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 23:10:47.733744009 +0000 UTC
Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.099895 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.099946 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.099961 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.099983 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.100000 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:27Z","lastTransitionTime":"2026-01-21T10:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:27 crc kubenswrapper[4684]: E0121 10:07:27.117307 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74077632-3431-446e-a506-0b618e843835\\\",\\\"systemUUID\\\":\\\"667581df-3edd-489b-b118-69df188c96a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.123378 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.123431 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.123444 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.123466 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.123480 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:27Z","lastTransitionTime":"2026-01-21T10:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.183980 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.184016 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.184027 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.184045 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.184056 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:27Z","lastTransitionTime":"2026-01-21T10:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:27 crc kubenswrapper[4684]: E0121 10:07:27.197503 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:07:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:07:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"74077632-3431-446e-a506-0b618e843835\\\",\\\"systemUUID\\\":\\\"667581df-3edd-489b-b118-69df188c96a2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:07:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:07:27 crc kubenswrapper[4684]: E0121 10:07:27.197684 4684 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.199318 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.199353 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.199383 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.199403 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.199415 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:27Z","lastTransitionTime":"2026-01-21T10:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.301932 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.301987 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.302000 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.302020 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.302036 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:27Z","lastTransitionTime":"2026-01-21T10:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.405139 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.405198 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.405218 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.405241 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.405259 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:27Z","lastTransitionTime":"2026-01-21T10:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.507873 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.507922 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.507940 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.507964 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.507979 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:27Z","lastTransitionTime":"2026-01-21T10:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.514377 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.514480 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:07:27 crc kubenswrapper[4684]: E0121 10:07:27.514629 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:07:27 crc kubenswrapper[4684]: E0121 10:07:27.514742 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.514733 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:07:27 crc kubenswrapper[4684]: E0121 10:07:27.515086 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.522713 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 03:19:36.876815312 +0000 UTC Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.610698 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.610741 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.610758 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.610784 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.610802 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:27Z","lastTransitionTime":"2026-01-21T10:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.719873 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.719924 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.719936 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.719956 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.719969 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:27Z","lastTransitionTime":"2026-01-21T10:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.822339 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.822436 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.822451 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.822480 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.822494 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:27Z","lastTransitionTime":"2026-01-21T10:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.925446 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.925506 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.925519 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.925543 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:27 crc kubenswrapper[4684]: I0121 10:07:27.925557 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:27Z","lastTransitionTime":"2026-01-21T10:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.028716 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.028773 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.028794 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.028815 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.028828 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:28Z","lastTransitionTime":"2026-01-21T10:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.132220 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.132308 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.132328 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.132399 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.132421 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:28Z","lastTransitionTime":"2026-01-21T10:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.235828 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.235888 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.235897 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.235924 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.235938 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:28Z","lastTransitionTime":"2026-01-21T10:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.338862 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.338925 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.338941 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.338960 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.338973 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:28Z","lastTransitionTime":"2026-01-21T10:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.441946 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.441993 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.442005 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.442023 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.442037 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:28Z","lastTransitionTime":"2026-01-21T10:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.514183 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:07:28 crc kubenswrapper[4684]: E0121 10:07:28.514413 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.523437 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 23:14:52.206411672 +0000 UTC Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.545320 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.545406 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.545421 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.545444 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.545461 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:28Z","lastTransitionTime":"2026-01-21T10:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.650083 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.650612 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.650622 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.650642 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.650654 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:28Z","lastTransitionTime":"2026-01-21T10:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.754385 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.754447 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.754461 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.754485 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.754502 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:28Z","lastTransitionTime":"2026-01-21T10:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.857643 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.857744 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.857775 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.857812 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.857840 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:28Z","lastTransitionTime":"2026-01-21T10:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.960966 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.961005 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.961015 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.961030 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:28 crc kubenswrapper[4684]: I0121 10:07:28.961041 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:28Z","lastTransitionTime":"2026-01-21T10:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.064112 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.064197 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.064223 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.064255 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.064279 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:29Z","lastTransitionTime":"2026-01-21T10:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.168164 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.168225 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.168245 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.168266 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.168280 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:29Z","lastTransitionTime":"2026-01-21T10:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.271582 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.271657 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.271678 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.271707 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.271725 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:29Z","lastTransitionTime":"2026-01-21T10:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.374520 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.374569 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.374578 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.374594 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.374604 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:29Z","lastTransitionTime":"2026-01-21T10:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.477456 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.477495 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.477507 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.477529 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.477543 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:29Z","lastTransitionTime":"2026-01-21T10:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.513736 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:07:29 crc kubenswrapper[4684]: E0121 10:07:29.513904 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.514141 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:07:29 crc kubenswrapper[4684]: E0121 10:07:29.514220 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.514484 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:07:29 crc kubenswrapper[4684]: E0121 10:07:29.514559 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.523958 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 12:44:48.723025218 +0000 UTC Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.579501 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.579542 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.579554 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.579572 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.579582 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:29Z","lastTransitionTime":"2026-01-21T10:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.682095 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.682133 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.682144 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.682162 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.682173 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:29Z","lastTransitionTime":"2026-01-21T10:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.785711 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.785962 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.785981 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.786007 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.786025 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:29Z","lastTransitionTime":"2026-01-21T10:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.888460 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.888509 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.888527 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.888553 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:29 crc kubenswrapper[4684]: I0121 10:07:29.888571 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:29Z","lastTransitionTime":"2026-01-21T10:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 10:07:30 crc kubenswrapper[4684]: I0121 10:07:30.514484 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7"
Jan 21 10:07:30 crc kubenswrapper[4684]: E0121 10:07:30.514711 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5"
Jan 21 10:07:30 crc kubenswrapper[4684]: I0121 10:07:30.524438 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 18:23:01.498537452 +0000 UTC
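The Ready condition above keeps reporting False for a single reason: kubelet's runtime network check finds no CNI configuration on disk. Below is a minimal standalone sketch of that check, written in Go to match the component being logged. The directory comes straight from the message text, and the .conf/.conflist/.json filter follows common CNI loader conventions; treat it as an illustration under those assumptions, not kubelet's actual source. On this node the files would normally be written by the network plugin whose pods appear later in the log (multus, ovnkube), at which point the condition clears.

// cnicheck.go: an illustrative sketch of the readiness test implied by the
// repeated "no CNI configuration file in /etc/kubernetes/cni/net.d/" message.
// Not kubelet's implementation; the extension filter is an assumption.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

const cniConfDir = "/etc/kubernetes/cni/net.d" // directory named in the log message

func main() {
	entries, err := os.ReadDir(cniConfDir)
	if err != nil {
		fmt.Printf("NetworkReady=false: cannot read %s: %v\n", cniConfDir, err)
		os.Exit(1)
	}
	var confs []string
	for _, e := range entries {
		// CNI loaders conventionally pick up .conf, .conflist and .json files.
		ext := strings.ToLower(filepath.Ext(e.Name()))
		if ext == ".conf" || ext == ".conflist" || ext == ".json" {
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		fmt.Println("NetworkReady=false reason:NetworkPluginNotReady: no CNI configuration file in", cniConfDir)
		os.Exit(1)
	}
	fmt.Printf("NetworkReady=true, found %v\n", confs)
}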
Jan 21 10:07:31 crc kubenswrapper[4684]: I0121 10:07:31.513827 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 10:07:31 crc kubenswrapper[4684]: I0121 10:07:31.513930 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 10:07:31 crc kubenswrapper[4684]: I0121 10:07:31.514026 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 10:07:31 crc kubenswrapper[4684]: E0121 10:07:31.514079 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 10:07:31 crc kubenswrapper[4684]: E0121 10:07:31.514244 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 10:07:31 crc kubenswrapper[4684]: E0121 10:07:31.514398 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 10:07:31 crc kubenswrapper[4684]: I0121 10:07:31.525002 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 18:54:29.940048233 +0000 UTC
Jan 21 10:07:32 crc kubenswrapper[4684]: I0121 10:07:32.513829 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7"
Jan 21 10:07:32 crc kubenswrapper[4684]: E0121 10:07:32.514241 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5"
Jan 21 10:07:32 crc kubenswrapper[4684]: I0121 10:07:32.525401 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 10:32:39.744045237 +0000 UTC
Jan 21 10:07:32 crc kubenswrapper[4684]: I0121 10:07:32.625006 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-xpk8b" podStartSLOduration=71.624982737 podStartE2EDuration="1m11.624982737s" podCreationTimestamp="2026-01-21 10:06:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:07:32.593213104 +0000 UTC m=+90.351296091" watchObservedRunningTime="2026-01-21 10:07:32.624982737 +0000 UTC m=+90.383065704"
Jan 21 10:07:32 crc kubenswrapper[4684]: I0121 10:07:32.667475 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=70.667457335 podStartE2EDuration="1m10.667457335s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:07:32.645191905 +0000 UTC m=+90.403274882" watchObservedRunningTime="2026-01-21 10:07:32.667457335 +0000 UTC m=+90.425540302"
Jan 21 10:07:32 crc kubenswrapper[4684]: I0121 10:07:32.709001 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podStartSLOduration=70.70898216 podStartE2EDuration="1m10.70898216s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:07:32.693645392 +0000 UTC m=+90.451728369" watchObservedRunningTime="2026-01-21 10:07:32.70898216 +0000 UTC m=+90.467065127"
Jan 21 10:07:32 crc kubenswrapper[4684]: I0121 10:07:32.722101 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzdqt" podStartSLOduration=70.722082098 podStartE2EDuration="1m10.722082098s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:07:32.708812574 +0000 UTC m=+90.466895541" watchObservedRunningTime="2026-01-21 10:07:32.722082098 +0000 UTC m=+90.480165065"
Jan 21 10:07:32 crc kubenswrapper[4684]: I0121 10:07:32.740123 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=68.74009101 podStartE2EDuration="1m8.74009101s" podCreationTimestamp="2026-01-21 10:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:07:32.739289621 +0000 UTC m=+90.497372598" watchObservedRunningTime="2026-01-21 10:07:32.74009101 +0000 UTC m=+90.498173977"
Jan 21 10:07:32 crc kubenswrapper[4684]: I0121 10:07:32.753600 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=38.753580182 podStartE2EDuration="38.753580182s" podCreationTimestamp="2026-01-21 10:06:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:07:32.753015372 +0000 UTC m=+90.511098339" watchObservedRunningTime="2026-01-21 10:07:32.753580182 +0000 UTC m=+90.511663149"
Jan 21 10:07:32 crc kubenswrapper[4684]: I0121 10:07:32.778765 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=71.778744373 podStartE2EDuration="1m11.778744373s" podCreationTimestamp="2026-01-21 10:06:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:07:32.776840676 +0000 UTC m=+90.534923643" watchObservedRunningTime="2026-01-21 10:07:32.778744373 +0000 UTC m=+90.536827340"
Jan 21 10:07:32 crc kubenswrapper[4684]: I0121 10:07:32.790449 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-hktl5" podStartSLOduration=71.790428203 podStartE2EDuration="1m11.790428203s" podCreationTimestamp="2026-01-21 10:06:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:07:32.789945116 +0000 UTC m=+90.548028093" watchObservedRunningTime="2026-01-21 10:07:32.790428203 +0000 UTC m=+90.548511190"
Jan 21 10:07:32 crc kubenswrapper[4684]: I0121 10:07:32.822939 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-6jwd4" podStartSLOduration=70.822917481 podStartE2EDuration="1m10.822917481s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:07:32.807142148 +0000 UTC m=+90.565225125" watchObservedRunningTime="2026-01-21 10:07:32.822917481 +0000 UTC m=+90.581000448"
Jan 21 10:07:32 crc kubenswrapper[4684]: I0121 10:07:32.841443 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-gshl8" podStartSLOduration=70.841421759 podStartE2EDuration="1m10.841421759s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:07:32.841405968 +0000 UTC m=+90.599488935" watchObservedRunningTime="2026-01-21 10:07:32.841421759 +0000 UTC m=+90.599504726"
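The podStartSLOduration figures in the burst above line up with plain subtraction of podCreationTimestamp from watchObservedRunningTime: for node-resolver-xpk8b, 10:07:32.624982737 minus 10:06:21 is 71.624982737 seconds, exactly the logged podStartSLOduration=71.624982737 (and the 1m11.624982737s E2E figure, since no image pulling was observed). A quick Go check using those two timestamps copied from the log:

// slocheck.go: confirm podStartSLOduration = watchObservedRunningTime - podCreationTimestamp
// for the node-resolver-xpk8b entry above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // matches the log's timestamp format
	created, err := time.Parse(layout, "2026-01-21 10:06:21 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2026-01-21 10:07:32.624982737 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(observed.Sub(created).Seconds()) // prints 71.624982737
}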
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:32Z","lastTransitionTime":"2026-01-21T10:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:32 crc kubenswrapper[4684]: I0121 10:07:32.989966 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:32 crc kubenswrapper[4684]: I0121 10:07:32.990036 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:32 crc kubenswrapper[4684]: I0121 10:07:32.990054 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:32 crc kubenswrapper[4684]: I0121 10:07:32.990077 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:32 crc kubenswrapper[4684]: I0121 10:07:32.990091 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:32Z","lastTransitionTime":"2026-01-21T10:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:33 crc kubenswrapper[4684]: I0121 10:07:33.092297 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:33 crc kubenswrapper[4684]: I0121 10:07:33.092332 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:33 crc kubenswrapper[4684]: I0121 10:07:33.092342 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:33 crc kubenswrapper[4684]: I0121 10:07:33.092369 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:33 crc kubenswrapper[4684]: I0121 10:07:33.092390 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:33Z","lastTransitionTime":"2026-01-21T10:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:07:33 crc kubenswrapper[4684]: I0121 10:07:33.194841 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:33 crc kubenswrapper[4684]: I0121 10:07:33.194906 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:33 crc kubenswrapper[4684]: I0121 10:07:33.194921 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:33 crc kubenswrapper[4684]: I0121 10:07:33.194948 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:33 crc kubenswrapper[4684]: I0121 10:07:33.194966 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:33Z","lastTransitionTime":"2026-01-21T10:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:33 crc kubenswrapper[4684]: I0121 10:07:33.298148 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:33 crc kubenswrapper[4684]: I0121 10:07:33.298194 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:33 crc kubenswrapper[4684]: I0121 10:07:33.298210 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:33 crc kubenswrapper[4684]: I0121 10:07:33.298231 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:33 crc kubenswrapper[4684]: I0121 10:07:33.298240 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:33Z","lastTransitionTime":"2026-01-21T10:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:33 crc kubenswrapper[4684]: I0121 10:07:33.401192 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:33 crc kubenswrapper[4684]: I0121 10:07:33.401249 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:33 crc kubenswrapper[4684]: I0121 10:07:33.401267 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:33 crc kubenswrapper[4684]: I0121 10:07:33.401291 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:33 crc kubenswrapper[4684]: I0121 10:07:33.401304 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:33Z","lastTransitionTime":"2026-01-21T10:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 21 10:07:33 crc kubenswrapper[4684]: I0121 10:07:33.513780 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 10:07:33 crc kubenswrapper[4684]: I0121 10:07:33.513876 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 10:07:33 crc kubenswrapper[4684]: E0121 10:07:33.513899 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 10:07:33 crc kubenswrapper[4684]: E0121 10:07:33.514352 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 10:07:33 crc kubenswrapper[4684]: I0121 10:07:33.514571 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 10:07:33 crc kubenswrapper[4684]: E0121 10:07:33.514689 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 10:07:33 crc kubenswrapper[4684]: I0121 10:07:33.525912 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 16:21:31.298722712 +0000 UTC
Has your network provider started?"} Jan 21 10:07:34 crc kubenswrapper[4684]: I0121 10:07:34.429742 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:34 crc kubenswrapper[4684]: I0121 10:07:34.429799 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:34 crc kubenswrapper[4684]: I0121 10:07:34.429811 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:34 crc kubenswrapper[4684]: I0121 10:07:34.429831 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:34 crc kubenswrapper[4684]: I0121 10:07:34.429844 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:34Z","lastTransitionTime":"2026-01-21T10:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:07:34 crc kubenswrapper[4684]: I0121 10:07:34.514557 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:07:34 crc kubenswrapper[4684]: E0121 10:07:34.514767 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:07:34 crc kubenswrapper[4684]: I0121 10:07:34.526345 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 05:46:51.518106412 +0000 UTC Jan 21 10:07:34 crc kubenswrapper[4684]: I0121 10:07:34.532506 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:07:34 crc kubenswrapper[4684]: I0121 10:07:34.532595 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:07:34 crc kubenswrapper[4684]: I0121 10:07:34.532617 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:07:34 crc kubenswrapper[4684]: I0121 10:07:34.532648 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:07:34 crc kubenswrapper[4684]: I0121 10:07:34.532670 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:34Z","lastTransitionTime":"2026-01-21T10:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
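The status loop above repeats every ~100 ms because the kubelet keeps republishing the node conditions until the container runtime reports NetworkReady=true, which happens only once a CNI configuration file appears in /etc/kubernetes/cni/net.d/ (on this cluster that file is written by the OVN-Kubernetes pod that is still crash-looping below). A minimal Go sketch of that readiness test follows; the extension list mirrors the ocicni defaults CRI-O uses, but this is an illustration, not the runtime's actual code:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // netDir is the directory named in the kubelet errors above.
    const netDir = "/etc/kubernetes/cni/net.d"

    func main() {
        entries, err := os.ReadDir(netDir)
        if err != nil {
            fmt.Println("NetworkReady=false:", err)
            return
        }
        // The runtime flips to ready once at least one CNI config exists.
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("NetworkReady=true, found", e.Name())
                return
            }
        }
        fmt.Println("NetworkReady=false: no CNI configuration file in", netDir)
    }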
Jan 21 10:07:34 crc kubenswrapper[4684]: I0121 10:07:34.634892 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:34 crc kubenswrapper[4684]: I0121 10:07:34.634948 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:34 crc kubenswrapper[4684]: I0121 10:07:34.634957 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:34 crc kubenswrapper[4684]: I0121 10:07:34.634975 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:34 crc kubenswrapper[4684]: I0121 10:07:34.634985 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:34Z","lastTransitionTime":"2026-01-21T10:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:34 crc kubenswrapper[4684]: I0121 10:07:34.738147 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:34 crc kubenswrapper[4684]: I0121 10:07:34.738227 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:34 crc kubenswrapper[4684]: I0121 10:07:34.738252 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:34 crc kubenswrapper[4684]: I0121 10:07:34.738290 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:34 crc kubenswrapper[4684]: I0121 10:07:34.738312 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:34Z","lastTransitionTime":"2026-01-21T10:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:34 crc kubenswrapper[4684]: I0121 10:07:34.841267 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:34 crc kubenswrapper[4684]: I0121 10:07:34.841315 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:34 crc kubenswrapper[4684]: I0121 10:07:34.841330 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:34 crc kubenswrapper[4684]: I0121 10:07:34.841350 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:34 crc kubenswrapper[4684]: I0121 10:07:34.841390 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:34Z","lastTransitionTime":"2026-01-21T10:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:34 crc kubenswrapper[4684]: I0121 10:07:34.943666 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:34 crc kubenswrapper[4684]: I0121 10:07:34.943713 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:34 crc kubenswrapper[4684]: I0121 10:07:34.943726 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:34 crc kubenswrapper[4684]: I0121 10:07:34.943746 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:34 crc kubenswrapper[4684]: I0121 10:07:34.943763 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:34Z","lastTransitionTime":"2026-01-21T10:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.046658 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.046717 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.046736 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.046759 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.046778 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:35Z","lastTransitionTime":"2026-01-21T10:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.149588 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.149641 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.149660 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.149684 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.149703 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:35Z","lastTransitionTime":"2026-01-21T10:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.252510 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.252564 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.252577 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.252598 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.252614 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:35Z","lastTransitionTime":"2026-01-21T10:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.355130 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.355284 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.355301 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.355321 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.355335 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:35Z","lastTransitionTime":"2026-01-21T10:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.458385 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.458473 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.458491 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.458518 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.458538 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:35Z","lastTransitionTime":"2026-01-21T10:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.513703 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.513787 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 10:07:35 crc kubenswrapper[4684]: E0121 10:07:35.513913 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.513976 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 10:07:35 crc kubenswrapper[4684]: E0121 10:07:35.514109 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 10:07:35 crc kubenswrapper[4684]: E0121 10:07:35.514222 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.527181 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 09:07:05.950332804 +0000 UTC
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.560614 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.560677 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.560695 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.560721 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.560741 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:35Z","lastTransitionTime":"2026-01-21T10:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.663560 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.663634 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.663649 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.663673 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.663693 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:35Z","lastTransitionTime":"2026-01-21T10:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.768132 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.768257 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.768286 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.768312 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.768327 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:35Z","lastTransitionTime":"2026-01-21T10:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.872560 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.872627 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.872642 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.872674 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.872688 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:35Z","lastTransitionTime":"2026-01-21T10:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.975726 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.975789 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.975803 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.975826 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:35 crc kubenswrapper[4684]: I0121 10:07:35.975841 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:35Z","lastTransitionTime":"2026-01-21T10:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.079842 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.079918 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.079938 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.079972 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.079992 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:36Z","lastTransitionTime":"2026-01-21T10:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.183458 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.183535 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.183552 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.183575 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.183592 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:36Z","lastTransitionTime":"2026-01-21T10:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.286439 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.286550 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.286562 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.286582 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.286597 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:36Z","lastTransitionTime":"2026-01-21T10:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.389824 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.389889 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.389909 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.389937 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.389955 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:36Z","lastTransitionTime":"2026-01-21T10:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.492075 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.492115 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.492126 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.492144 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.492161 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:36Z","lastTransitionTime":"2026-01-21T10:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.533716 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 13:08:35.68849008 +0000 UTC
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.534878 4684 scope.go:117] "RemoveContainer" containerID="4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6"
Jan 21 10:07:36 crc kubenswrapper[4684]: E0121 10:07:36.535091 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6vjwl_openshift-ovn-kubernetes(8dac888b-051f-405a-8c23-60c205d2aecc)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" podUID="8dac888b-051f-405a-8c23-60c205d2aecc"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.535165 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7"
Jan 21 10:07:36 crc kubenswrapper[4684]: E0121 10:07:36.535540 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.553696 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.595604 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.595675 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.595693 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.595721 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.595741 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:36Z","lastTransitionTime":"2026-01-21T10:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
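The "back-off 40s restarting failed container" message above is the kubelet's crash-loop backoff for ovnkube-controller: the delay starts at 10s and doubles per failed restart up to a 5m cap (stock kubelet defaults, assumed unchanged on this node), so 40s implies the container has already failed a few restarts. A sketch of that schedule:

    package main

    import (
        "fmt"
        "time"
    )

    // crashLoopDelay mimics the kubelet's container restart backoff:
    // 10s base, doubling per prior failure, capped at 5m (assumed
    // defaults; not read from this node's configuration).
    func crashLoopDelay(priorFailures int) time.Duration {
        d := 10 * time.Second
        for i := 0; i < priorFailures; i++ {
            d *= 2
            if d > 5*time.Minute {
                return 5 * time.Minute
            }
        }
        return d
    }

    func main() {
        for n := 0; n <= 5; n++ {
            fmt.Printf("failure %d -> back-off %s\n", n, crashLoopDelay(n))
        }
        // failure 2 -> back-off 40s, matching ovnkube-controller above.
    }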
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.699339 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.699434 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.699448 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.699474 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.699492 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:36Z","lastTransitionTime":"2026-01-21T10:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.802430 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.802491 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.802504 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.802527 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.802542 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:36Z","lastTransitionTime":"2026-01-21T10:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.905544 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.905595 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.905606 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.905623 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:36 crc kubenswrapper[4684]: I0121 10:07:36.905636 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:36Z","lastTransitionTime":"2026-01-21T10:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.008174 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.008246 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.008271 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.008305 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.008329 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:37Z","lastTransitionTime":"2026-01-21T10:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.111445 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.111494 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.111508 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.111525 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.111536 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:37Z","lastTransitionTime":"2026-01-21T10:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.213835 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.213936 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.213963 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.213994 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.214017 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:37Z","lastTransitionTime":"2026-01-21T10:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.277173 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.277210 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.277220 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.277240 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.277249 4684 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:07:37Z","lastTransitionTime":"2026-01-21T10:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.340197 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-6l46s"]
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.340662 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6l46s"
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.343528 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.343638 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.343651 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.343736 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.350928 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/930f7ec9-6ff6-4b45-91fe-825fa941a474-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6l46s\" (UID: \"930f7ec9-6ff6-4b45-91fe-825fa941a474\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6l46s"
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.350999 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/930f7ec9-6ff6-4b45-91fe-825fa941a474-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6l46s\" (UID: \"930f7ec9-6ff6-4b45-91fe-825fa941a474\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6l46s"
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.351062 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/930f7ec9-6ff6-4b45-91fe-825fa941a474-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6l46s\" (UID: \"930f7ec9-6ff6-4b45-91fe-825fa941a474\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6l46s"
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.351083 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/930f7ec9-6ff6-4b45-91fe-825fa941a474-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6l46s\" (UID: \"930f7ec9-6ff6-4b45-91fe-825fa941a474\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6l46s"
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.351100 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/930f7ec9-6ff6-4b45-91fe-825fa941a474-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6l46s\" (UID: \"930f7ec9-6ff6-4b45-91fe-825fa941a474\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6l46s"
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.373102 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=1.373079055 podStartE2EDuration="1.373079055s" podCreationTimestamp="2026-01-21 10:07:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:07:37.372561926 +0000 UTC m=+95.130644913" watchObservedRunningTime="2026-01-21 10:07:37.373079055 +0000 UTC m=+95.131162022"
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.452459 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/930f7ec9-6ff6-4b45-91fe-825fa941a474-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6l46s\" (UID: \"930f7ec9-6ff6-4b45-91fe-825fa941a474\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6l46s"
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.452523 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/930f7ec9-6ff6-4b45-91fe-825fa941a474-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6l46s\" (UID: \"930f7ec9-6ff6-4b45-91fe-825fa941a474\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6l46s"
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.452560 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/930f7ec9-6ff6-4b45-91fe-825fa941a474-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6l46s\" (UID: \"930f7ec9-6ff6-4b45-91fe-825fa941a474\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6l46s"
Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.452731 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/930f7ec9-6ff6-4b45-91fe-825fa941a474-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6l46s\" (UID: \"930f7ec9-6ff6-4b45-91fe-825fa941a474\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6l46s"
"operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/930f7ec9-6ff6-4b45-91fe-825fa941a474-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6l46s\" (UID: \"930f7ec9-6ff6-4b45-91fe-825fa941a474\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6l46s" Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.453022 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/930f7ec9-6ff6-4b45-91fe-825fa941a474-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6l46s\" (UID: \"930f7ec9-6ff6-4b45-91fe-825fa941a474\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6l46s" Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.453132 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/930f7ec9-6ff6-4b45-91fe-825fa941a474-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6l46s\" (UID: \"930f7ec9-6ff6-4b45-91fe-825fa941a474\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6l46s" Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.454484 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/930f7ec9-6ff6-4b45-91fe-825fa941a474-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6l46s\" (UID: \"930f7ec9-6ff6-4b45-91fe-825fa941a474\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6l46s" Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.461906 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/930f7ec9-6ff6-4b45-91fe-825fa941a474-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6l46s\" (UID: \"930f7ec9-6ff6-4b45-91fe-825fa941a474\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6l46s" Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.472229 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/930f7ec9-6ff6-4b45-91fe-825fa941a474-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6l46s\" (UID: \"930f7ec9-6ff6-4b45-91fe-825fa941a474\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6l46s" Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.513592 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.513723 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.513848 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:07:37 crc kubenswrapper[4684]: E0121 10:07:37.514040 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:07:37 crc kubenswrapper[4684]: E0121 10:07:37.514229 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:07:37 crc kubenswrapper[4684]: E0121 10:07:37.514540 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.534809 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 10:41:01.244237054 +0000 UTC Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.534898 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.547386 4684 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 21 10:07:37 crc kubenswrapper[4684]: I0121 10:07:37.658910 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6l46s" Jan 21 10:07:38 crc kubenswrapper[4684]: I0121 10:07:38.109437 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6l46s" event={"ID":"930f7ec9-6ff6-4b45-91fe-825fa941a474","Type":"ContainerStarted","Data":"f318bb2539d2ceca9326c628425ee07dfcfdc0b9c2a05f808527c90e181f8ac0"} Jan 21 10:07:38 crc kubenswrapper[4684]: I0121 10:07:38.109889 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6l46s" event={"ID":"930f7ec9-6ff6-4b45-91fe-825fa941a474","Type":"ContainerStarted","Data":"3e878e2f83f9b200e8a6f49fb0b1ee6fe451b2ed1ce4fcd903ff4c52f094d1f8"} Jan 21 10:07:38 crc kubenswrapper[4684]: I0121 10:07:38.129333 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6l46s" podStartSLOduration=76.129316617 podStartE2EDuration="1m16.129316617s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:07:38.128048933 +0000 UTC m=+95.886131900" watchObservedRunningTime="2026-01-21 10:07:38.129316617 +0000 UTC m=+95.887399594" Jan 21 10:07:38 crc kubenswrapper[4684]: I0121 10:07:38.514215 4684 util.go:30] "No sandbox for pod can be found. 
Jan 21 10:07:38 crc kubenswrapper[4684]: E0121 10:07:38.514492 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5"
Jan 21 10:07:39 crc kubenswrapper[4684]: I0121 10:07:39.513673 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 10:07:39 crc kubenswrapper[4684]: I0121 10:07:39.513732 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 10:07:39 crc kubenswrapper[4684]: I0121 10:07:39.513820 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 10:07:39 crc kubenswrapper[4684]: E0121 10:07:39.513872 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 10:07:39 crc kubenswrapper[4684]: E0121 10:07:39.514106 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 10:07:39 crc kubenswrapper[4684]: E0121 10:07:39.514127 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 10:07:40 crc kubenswrapper[4684]: I0121 10:07:40.514134 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7"
Jan 21 10:07:40 crc kubenswrapper[4684]: E0121 10:07:40.514531 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5"
Jan 21 10:07:40 crc kubenswrapper[4684]: I0121 10:07:40.590156 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49971ee3-e56a-4d50-8fc5-231bdcfc92d5-metrics-certs\") pod \"network-metrics-daemon-7wzh7\" (UID: \"49971ee3-e56a-4d50-8fc5-231bdcfc92d5\") " pod="openshift-multus/network-metrics-daemon-7wzh7"
Jan 21 10:07:40 crc kubenswrapper[4684]: E0121 10:07:40.590422 4684 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 10:07:40 crc kubenswrapper[4684]: E0121 10:07:40.590567 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49971ee3-e56a-4d50-8fc5-231bdcfc92d5-metrics-certs podName:49971ee3-e56a-4d50-8fc5-231bdcfc92d5 nodeName:}" failed. No retries permitted until 2026-01-21 10:08:44.590538009 +0000 UTC m=+162.348620976 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/49971ee3-e56a-4d50-8fc5-231bdcfc92d5-metrics-certs") pod "network-metrics-daemon-7wzh7" (UID: "49971ee3-e56a-4d50-8fc5-231bdcfc92d5") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 10:07:41 crc kubenswrapper[4684]: I0121 10:07:41.514351 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 10:07:41 crc kubenswrapper[4684]: I0121 10:07:41.514467 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 10:07:41 crc kubenswrapper[4684]: E0121 10:07:41.515012 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 10:07:41 crc kubenswrapper[4684]: I0121 10:07:41.514482 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 10:07:41 crc kubenswrapper[4684]: E0121 10:07:41.515155 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 10:07:41 crc kubenswrapper[4684]: E0121 10:07:41.514848 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 10:07:42 crc kubenswrapper[4684]: I0121 10:07:42.514270 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7"
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:07:42 crc kubenswrapper[4684]: E0121 10:07:42.516266 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:07:43 crc kubenswrapper[4684]: I0121 10:07:43.513761 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:07:43 crc kubenswrapper[4684]: I0121 10:07:43.513827 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:07:43 crc kubenswrapper[4684]: I0121 10:07:43.513855 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:07:43 crc kubenswrapper[4684]: E0121 10:07:43.513928 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:07:43 crc kubenswrapper[4684]: E0121 10:07:43.514078 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:07:43 crc kubenswrapper[4684]: E0121 10:07:43.514214 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:07:44 crc kubenswrapper[4684]: I0121 10:07:44.514596 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:07:44 crc kubenswrapper[4684]: E0121 10:07:44.514820 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:07:45 crc kubenswrapper[4684]: I0121 10:07:45.513770 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:07:45 crc kubenswrapper[4684]: I0121 10:07:45.513856 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:07:45 crc kubenswrapper[4684]: E0121 10:07:45.513953 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:07:45 crc kubenswrapper[4684]: I0121 10:07:45.513768 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:07:45 crc kubenswrapper[4684]: E0121 10:07:45.514124 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:07:45 crc kubenswrapper[4684]: E0121 10:07:45.514225 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:07:46 crc kubenswrapper[4684]: I0121 10:07:46.514649 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:07:46 crc kubenswrapper[4684]: E0121 10:07:46.514877 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:07:47 crc kubenswrapper[4684]: I0121 10:07:47.514300 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:07:47 crc kubenswrapper[4684]: I0121 10:07:47.514540 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:07:47 crc kubenswrapper[4684]: I0121 10:07:47.514574 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:07:47 crc kubenswrapper[4684]: E0121 10:07:47.514711 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:07:47 crc kubenswrapper[4684]: E0121 10:07:47.514760 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:07:47 crc kubenswrapper[4684]: E0121 10:07:47.514828 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:07:48 crc kubenswrapper[4684]: I0121 10:07:48.514760 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:07:48 crc kubenswrapper[4684]: E0121 10:07:48.514970 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:07:49 crc kubenswrapper[4684]: I0121 10:07:49.514492 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:07:49 crc kubenswrapper[4684]: I0121 10:07:49.514585 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:07:49 crc kubenswrapper[4684]: I0121 10:07:49.514571 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:07:49 crc kubenswrapper[4684]: E0121 10:07:49.514755 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:07:49 crc kubenswrapper[4684]: E0121 10:07:49.515005 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:07:49 crc kubenswrapper[4684]: E0121 10:07:49.515250 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:07:50 crc kubenswrapper[4684]: I0121 10:07:50.514346 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:07:50 crc kubenswrapper[4684]: E0121 10:07:50.514603 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:07:50 crc kubenswrapper[4684]: I0121 10:07:50.516510 4684 scope.go:117] "RemoveContainer" containerID="4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6" Jan 21 10:07:50 crc kubenswrapper[4684]: E0121 10:07:50.516852 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6vjwl_openshift-ovn-kubernetes(8dac888b-051f-405a-8c23-60c205d2aecc)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" Jan 21 10:07:51 crc kubenswrapper[4684]: I0121 10:07:51.513904 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:07:51 crc kubenswrapper[4684]: I0121 10:07:51.513950 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:07:51 crc kubenswrapper[4684]: E0121 10:07:51.514091 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:07:51 crc kubenswrapper[4684]: I0121 10:07:51.514191 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:07:51 crc kubenswrapper[4684]: E0121 10:07:51.514487 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:07:51 crc kubenswrapper[4684]: E0121 10:07:51.515101 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:07:52 crc kubenswrapper[4684]: I0121 10:07:52.514660 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:07:52 crc kubenswrapper[4684]: E0121 10:07:52.515840 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:07:53 crc kubenswrapper[4684]: I0121 10:07:53.514521 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:07:53 crc kubenswrapper[4684]: I0121 10:07:53.514639 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:07:53 crc kubenswrapper[4684]: E0121 10:07:53.514697 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:07:53 crc kubenswrapper[4684]: I0121 10:07:53.514801 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:07:53 crc kubenswrapper[4684]: E0121 10:07:53.514827 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:07:53 crc kubenswrapper[4684]: E0121 10:07:53.515023 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:07:54 crc kubenswrapper[4684]: I0121 10:07:54.514074 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:07:54 crc kubenswrapper[4684]: E0121 10:07:54.514331 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:07:55 crc kubenswrapper[4684]: I0121 10:07:55.514504 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:07:55 crc kubenswrapper[4684]: I0121 10:07:55.514588 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:07:55 crc kubenswrapper[4684]: I0121 10:07:55.514523 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:07:55 crc kubenswrapper[4684]: E0121 10:07:55.514712 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:07:55 crc kubenswrapper[4684]: E0121 10:07:55.514824 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:07:55 crc kubenswrapper[4684]: E0121 10:07:55.514948 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:07:56 crc kubenswrapper[4684]: I0121 10:07:56.513947 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:07:56 crc kubenswrapper[4684]: E0121 10:07:56.514152 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:07:57 crc kubenswrapper[4684]: I0121 10:07:57.513870 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:07:57 crc kubenswrapper[4684]: I0121 10:07:57.513932 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:07:57 crc kubenswrapper[4684]: I0121 10:07:57.513988 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:07:57 crc kubenswrapper[4684]: E0121 10:07:57.514301 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:07:57 crc kubenswrapper[4684]: E0121 10:07:57.514467 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:07:57 crc kubenswrapper[4684]: E0121 10:07:57.514564 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:07:58 crc kubenswrapper[4684]: I0121 10:07:58.190915 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6jwd4_1e7ac4c6-b960-418c-b057-e55d95a213cd/kube-multus/1.log" Jan 21 10:07:58 crc kubenswrapper[4684]: I0121 10:07:58.191676 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6jwd4_1e7ac4c6-b960-418c-b057-e55d95a213cd/kube-multus/0.log" Jan 21 10:07:58 crc kubenswrapper[4684]: I0121 10:07:58.191753 4684 generic.go:334] "Generic (PLEG): container finished" podID="1e7ac4c6-b960-418c-b057-e55d95a213cd" containerID="ecdbfb9dca303cd8a667876237175bf17e36baff120b184253fc01c630c37b8d" exitCode=1 Jan 21 10:07:58 crc kubenswrapper[4684]: I0121 10:07:58.191815 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6jwd4" event={"ID":"1e7ac4c6-b960-418c-b057-e55d95a213cd","Type":"ContainerDied","Data":"ecdbfb9dca303cd8a667876237175bf17e36baff120b184253fc01c630c37b8d"} Jan 21 10:07:58 crc kubenswrapper[4684]: I0121 10:07:58.191902 4684 scope.go:117] "RemoveContainer" containerID="a142969c69311491529406590e126d63b92f20dddf2fd84ec103b29b8620b857" Jan 21 10:07:58 crc kubenswrapper[4684]: I0121 10:07:58.193209 4684 scope.go:117] "RemoveContainer" containerID="ecdbfb9dca303cd8a667876237175bf17e36baff120b184253fc01c630c37b8d" Jan 21 10:07:58 crc kubenswrapper[4684]: E0121 10:07:58.193595 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-6jwd4_openshift-multus(1e7ac4c6-b960-418c-b057-e55d95a213cd)\"" pod="openshift-multus/multus-6jwd4" podUID="1e7ac4c6-b960-418c-b057-e55d95a213cd" Jan 21 10:07:58 crc kubenswrapper[4684]: I0121 10:07:58.514016 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:07:58 crc kubenswrapper[4684]: E0121 10:07:58.514435 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:07:59 crc kubenswrapper[4684]: I0121 10:07:59.197772 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6jwd4_1e7ac4c6-b960-418c-b057-e55d95a213cd/kube-multus/1.log" Jan 21 10:07:59 crc kubenswrapper[4684]: I0121 10:07:59.513709 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:07:59 crc kubenswrapper[4684]: E0121 10:07:59.513850 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:07:59 crc kubenswrapper[4684]: I0121 10:07:59.513916 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:07:59 crc kubenswrapper[4684]: I0121 10:07:59.513991 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:07:59 crc kubenswrapper[4684]: E0121 10:07:59.514252 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:07:59 crc kubenswrapper[4684]: E0121 10:07:59.514455 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:08:00 crc kubenswrapper[4684]: I0121 10:08:00.513944 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:08:00 crc kubenswrapper[4684]: E0121 10:08:00.514206 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:08:01 crc kubenswrapper[4684]: I0121 10:08:01.514257 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:08:01 crc kubenswrapper[4684]: E0121 10:08:01.514503 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:08:01 crc kubenswrapper[4684]: I0121 10:08:01.514536 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:08:01 crc kubenswrapper[4684]: I0121 10:08:01.514747 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:08:01 crc kubenswrapper[4684]: E0121 10:08:01.515075 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:08:01 crc kubenswrapper[4684]: E0121 10:08:01.515307 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:08:02 crc kubenswrapper[4684]: E0121 10:08:02.509226 4684 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 21 10:08:02 crc kubenswrapper[4684]: I0121 10:08:02.513653 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:08:02 crc kubenswrapper[4684]: E0121 10:08:02.515271 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:08:02 crc kubenswrapper[4684]: E0121 10:08:02.650147 4684 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 10:08:03 crc kubenswrapper[4684]: I0121 10:08:03.514238 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:08:03 crc kubenswrapper[4684]: I0121 10:08:03.514307 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:08:03 crc kubenswrapper[4684]: I0121 10:08:03.514409 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:08:03 crc kubenswrapper[4684]: E0121 10:08:03.514523 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:08:03 crc kubenswrapper[4684]: E0121 10:08:03.514659 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:08:03 crc kubenswrapper[4684]: E0121 10:08:03.514780 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:08:04 crc kubenswrapper[4684]: I0121 10:08:04.514581 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:08:04 crc kubenswrapper[4684]: I0121 10:08:04.515533 4684 scope.go:117] "RemoveContainer" containerID="4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6" Jan 21 10:08:04 crc kubenswrapper[4684]: E0121 10:08:04.515770 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:08:05 crc kubenswrapper[4684]: I0121 10:08:05.223855 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6vjwl_8dac888b-051f-405a-8c23-60c205d2aecc/ovnkube-controller/3.log" Jan 21 10:08:05 crc kubenswrapper[4684]: I0121 10:08:05.226663 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" event={"ID":"8dac888b-051f-405a-8c23-60c205d2aecc","Type":"ContainerStarted","Data":"209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae"} Jan 21 10:08:05 crc kubenswrapper[4684]: I0121 10:08:05.227202 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:08:05 crc kubenswrapper[4684]: I0121 10:08:05.264967 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" podStartSLOduration=103.26494728 podStartE2EDuration="1m43.26494728s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:05.264702651 +0000 UTC m=+123.022785668" watchObservedRunningTime="2026-01-21 10:08:05.26494728 +0000 UTC m=+123.023030247" Jan 21 10:08:05 crc kubenswrapper[4684]: I0121 10:08:05.380074 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7wzh7"] Jan 21 10:08:05 crc kubenswrapper[4684]: I0121 10:08:05.380427 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:08:05 crc kubenswrapper[4684]: E0121 10:08:05.380930 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:08:05 crc kubenswrapper[4684]: I0121 10:08:05.513599 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:08:05 crc kubenswrapper[4684]: I0121 10:08:05.513615 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:08:05 crc kubenswrapper[4684]: E0121 10:08:05.513753 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:08:05 crc kubenswrapper[4684]: I0121 10:08:05.513802 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:08:05 crc kubenswrapper[4684]: E0121 10:08:05.513831 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:08:05 crc kubenswrapper[4684]: E0121 10:08:05.513968 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:08:07 crc kubenswrapper[4684]: I0121 10:08:07.514417 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:08:07 crc kubenswrapper[4684]: I0121 10:08:07.514441 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:08:07 crc kubenswrapper[4684]: E0121 10:08:07.515296 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:08:07 crc kubenswrapper[4684]: I0121 10:08:07.514441 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:08:07 crc kubenswrapper[4684]: I0121 10:08:07.514475 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:08:07 crc kubenswrapper[4684]: E0121 10:08:07.514981 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:08:07 crc kubenswrapper[4684]: E0121 10:08:07.515491 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:08:07 crc kubenswrapper[4684]: E0121 10:08:07.515605 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:08:07 crc kubenswrapper[4684]: E0121 10:08:07.652161 4684 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 10:08:08 crc kubenswrapper[4684]: I0121 10:08:08.514805 4684 scope.go:117] "RemoveContainer" containerID="ecdbfb9dca303cd8a667876237175bf17e36baff120b184253fc01c630c37b8d" Jan 21 10:08:09 crc kubenswrapper[4684]: I0121 10:08:09.244125 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6jwd4_1e7ac4c6-b960-418c-b057-e55d95a213cd/kube-multus/1.log" Jan 21 10:08:09 crc kubenswrapper[4684]: I0121 10:08:09.244789 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6jwd4" event={"ID":"1e7ac4c6-b960-418c-b057-e55d95a213cd","Type":"ContainerStarted","Data":"3da3e62c432812c504b13fff6b927c1f9c0e2d04accb6e450bd21504262c7eaf"} Jan 21 10:08:09 crc kubenswrapper[4684]: I0121 10:08:09.514323 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:08:09 crc kubenswrapper[4684]: I0121 10:08:09.514392 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:08:09 crc kubenswrapper[4684]: I0121 10:08:09.514323 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:08:09 crc kubenswrapper[4684]: I0121 10:08:09.514340 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:08:09 crc kubenswrapper[4684]: E0121 10:08:09.514531 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:08:09 crc kubenswrapper[4684]: E0121 10:08:09.514575 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:08:09 crc kubenswrapper[4684]: E0121 10:08:09.514820 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:08:09 crc kubenswrapper[4684]: E0121 10:08:09.514913 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:08:11 crc kubenswrapper[4684]: I0121 10:08:11.514312 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:08:11 crc kubenswrapper[4684]: I0121 10:08:11.514397 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:08:11 crc kubenswrapper[4684]: E0121 10:08:11.514515 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7wzh7" podUID="49971ee3-e56a-4d50-8fc5-231bdcfc92d5" Jan 21 10:08:11 crc kubenswrapper[4684]: I0121 10:08:11.514592 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:08:11 crc kubenswrapper[4684]: I0121 10:08:11.514603 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:08:11 crc kubenswrapper[4684]: E0121 10:08:11.514839 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:08:11 crc kubenswrapper[4684]: E0121 10:08:11.515009 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:08:11 crc kubenswrapper[4684]: E0121 10:08:11.515126 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:08:13 crc kubenswrapper[4684]: I0121 10:08:13.514150 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:08:13 crc kubenswrapper[4684]: I0121 10:08:13.514151 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:08:13 crc kubenswrapper[4684]: I0121 10:08:13.514196 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:08:13 crc kubenswrapper[4684]: I0121 10:08:13.515012 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:08:13 crc kubenswrapper[4684]: I0121 10:08:13.518352 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 10:08:13 crc kubenswrapper[4684]: I0121 10:08:13.518707 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 10:08:13 crc kubenswrapper[4684]: I0121 10:08:13.518898 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 10:08:13 crc kubenswrapper[4684]: I0121 10:08:13.519097 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 10:08:13 crc kubenswrapper[4684]: I0121 10:08:13.519393 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 10:08:13 crc kubenswrapper[4684]: I0121 10:08:13.520196 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.416644 4684 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.463007 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-k9lpf"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.463557 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k9lpf" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.468535 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-hz6v7"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.469130 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hz6v7" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.471340 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.473398 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.473746 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.474231 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.475256 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.475734 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.475847 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.473999 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.479498 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxz9b"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.481893 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.482123 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.482911 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.493451 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-5rnkx"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.493867 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-5rnkx" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.493873 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxz9b" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.494113 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.494579 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jmrsp"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.495301 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jmrsp" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.495638 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-rjpqz"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.496450 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rjpqz" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.498695 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jpqvb"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.499194 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5wsq8"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.499568 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-plndh"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.499931 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.500510 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jpqvb" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.501739 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-5wsq8" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.502664 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-98cfq"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.503297 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-q9wnn"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.503650 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.503711 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-b5z5t"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.504110 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.504195 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.504276 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.504308 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-98cfq" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.504614 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.504277 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.504704 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-q9wnn" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.504832 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.505610 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.506163 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.506876 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.508072 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t4gw6"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.508763 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t4gw6" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.509215 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jgwzk"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.509578 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-jgwzk" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.510033 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.510481 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.511940 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.512097 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.512224 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.512319 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.512471 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.522647 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-x6zks"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.523347 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxz9b"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.523399 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-k9lpf"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.523512 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-x6zks" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.527936 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.528216 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.528451 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.528823 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jmrsp"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.542353 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.542643 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.542810 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.543600 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.544250 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.544533 4684 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.544901 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lvmjt"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.545643 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lvmjt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.546226 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.546554 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-tv48v"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.553612 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.554966 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-tv48v" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.557771 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.563581 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.564012 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.565594 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.565907 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.567872 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.568532 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.569253 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.569550 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.569669 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.569773 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.570191 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 21 
10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.570382 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.570609 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.570872 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.572665 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.575562 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.575616 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e11d4c83-c131-4dd3-b20f-f38faad79236-config\") pod \"machine-approver-56656f9798-hz6v7\" (UID: \"e11d4c83-c131-4dd3-b20f-f38faad79236\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hz6v7" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.575674 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk2tn\" (UniqueName: \"kubernetes.io/projected/89f98230-b1df-4e65-9163-a82223eb2cf5-kube-api-access-gk2tn\") pod \"openshift-apiserver-operator-796bbdcf4f-pxz9b\" (UID: \"89f98230-b1df-4e65-9163-a82223eb2cf5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxz9b" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.575704 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nddn2\" (UniqueName: \"kubernetes.io/projected/e11d4c83-c131-4dd3-b20f-f38faad79236-kube-api-access-nddn2\") pod \"machine-approver-56656f9798-hz6v7\" (UID: \"e11d4c83-c131-4dd3-b20f-f38faad79236\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hz6v7" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.575761 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e11d4c83-c131-4dd3-b20f-f38faad79236-machine-approver-tls\") pod \"machine-approver-56656f9798-hz6v7\" (UID: \"e11d4c83-c131-4dd3-b20f-f38faad79236\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hz6v7" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.575795 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b70a09c-424c-4317-a719-e0dbb6eefe1b-config\") pod \"route-controller-manager-6576b87f9c-k9lpf\" (UID: \"5b70a09c-424c-4317-a719-e0dbb6eefe1b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k9lpf" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.575826 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km8vb\" (UniqueName: \"kubernetes.io/projected/5b70a09c-424c-4317-a719-e0dbb6eefe1b-kube-api-access-km8vb\") pod \"route-controller-manager-6576b87f9c-k9lpf\" 
(UID: \"5b70a09c-424c-4317-a719-e0dbb6eefe1b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k9lpf" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.575867 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89f98230-b1df-4e65-9163-a82223eb2cf5-config\") pod \"openshift-apiserver-operator-796bbdcf4f-pxz9b\" (UID: \"89f98230-b1df-4e65-9163-a82223eb2cf5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxz9b" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.575935 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e11d4c83-c131-4dd3-b20f-f38faad79236-auth-proxy-config\") pod \"machine-approver-56656f9798-hz6v7\" (UID: \"e11d4c83-c131-4dd3-b20f-f38faad79236\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hz6v7" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.575968 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b70a09c-424c-4317-a719-e0dbb6eefe1b-serving-cert\") pod \"route-controller-manager-6576b87f9c-k9lpf\" (UID: \"5b70a09c-424c-4317-a719-e0dbb6eefe1b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k9lpf" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.576011 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b70a09c-424c-4317-a719-e0dbb6eefe1b-client-ca\") pod \"route-controller-manager-6576b87f9c-k9lpf\" (UID: \"5b70a09c-424c-4317-a719-e0dbb6eefe1b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k9lpf" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.576050 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89f98230-b1df-4e65-9163-a82223eb2cf5-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-pxz9b\" (UID: \"89f98230-b1df-4e65-9163-a82223eb2cf5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxz9b" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.577189 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.577354 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.577867 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.577946 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.577963 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.578170 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.578280 4684 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.578347 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.578515 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.578904 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.580710 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.584556 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.588722 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.588933 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.589386 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7khjp"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.590233 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7khjp" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.592163 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.592458 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.595848 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.595973 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.596136 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.596221 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.596301 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.596442 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.596526 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.596661 
4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.596710 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.596975 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.599325 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.599673 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.600276 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.601074 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.601282 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.602465 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.602498 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.602553 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.602721 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.602753 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.602798 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.602892 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.602990 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.603963 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.605514 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.605711 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.606326 4684 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.606516 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.606624 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-65tjm"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.607350 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-65tjm" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.608294 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-rjpqz"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.608491 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.610270 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.616413 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.617430 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-dq2dv"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.618214 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-x97lq"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.634629 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dq2dv" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.636441 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.636861 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8k2vm"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.637790 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.638943 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-jfblx"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.640398 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jfblx" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.644953 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-x97lq" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.646079 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.648874 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vn2nz"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.649917 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vn2nz" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.653692 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.653810 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.653941 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.654171 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.660930 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dd9dl"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.661665 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dd9dl" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.661985 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6wfjd"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.662783 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6wfjd" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.665111 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jpqvb"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.665655 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.666411 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-64j7q"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.667284 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-qjbj4"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.667635 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-64j7q" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.667842 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qjbj4" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.669724 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72t62"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.670409 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72t62" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.672932 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5wsq8"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.674091 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-6g6vs"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.674820 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-6g6vs" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.676573 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9x6qf"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.676851 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71784ae3-e88b-48b3-94e4-f312f673084d-service-ca-bundle\") pod \"authentication-operator-69f744f599-5wsq8\" (UID: \"71784ae3-e88b-48b3-94e4-f312f673084d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5wsq8" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.676912 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e7bad9ca-8600-4221-b33c-89e055ff177d-encryption-config\") pod \"apiserver-7bbb656c7d-qsl5r\" (UID: \"e7bad9ca-8600-4221-b33c-89e055ff177d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.676944 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/634c4722-8d7d-463a-9844-8213116a4ce7-trusted-ca\") pod \"console-operator-58897d9998-jgwzk\" (UID: \"634c4722-8d7d-463a-9844-8213116a4ce7\") " pod="openshift-console-operator/console-operator-58897d9998-jgwzk" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.676963 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/42c27350-86e3-4a02-9194-5dd24c297a12-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-q9wnn\" (UID: \"42c27350-86e3-4a02-9194-5dd24c297a12\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q9wnn" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.676986 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a16e132-7543-4642-bd55-b3abf288e009-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-7khjp\" (UID: \"2a16e132-7543-4642-bd55-b3abf288e009\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7khjp" Jan 21 
10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.677004 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.677026 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1cc5379-f931-4e88-b422-4ecc9e33e2a0-trusted-ca-bundle\") pod \"console-f9d7485db-x6zks\" (UID: \"d1cc5379-f931-4e88-b422-4ecc9e33e2a0\") " pod="openshift-console/console-f9d7485db-x6zks" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.677044 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mrfp\" (UniqueName: \"kubernetes.io/projected/71784ae3-e88b-48b3-94e4-f312f673084d-kube-api-access-4mrfp\") pod \"authentication-operator-69f744f599-5wsq8\" (UID: \"71784ae3-e88b-48b3-94e4-f312f673084d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5wsq8" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.677063 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e7bad9ca-8600-4221-b33c-89e055ff177d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-qsl5r\" (UID: \"e7bad9ca-8600-4221-b33c-89e055ff177d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.677082 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9ebaa53-a616-4d3f-a69f-19da113978c3-config\") pod \"controller-manager-879f6c89f-jmrsp\" (UID: \"e9ebaa53-a616-4d3f-a69f-19da113978c3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jmrsp" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.677114 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk2tn\" (UniqueName: \"kubernetes.io/projected/89f98230-b1df-4e65-9163-a82223eb2cf5-kube-api-access-gk2tn\") pod \"openshift-apiserver-operator-796bbdcf4f-pxz9b\" (UID: \"89f98230-b1df-4e65-9163-a82223eb2cf5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxz9b" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.677138 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/638fa066-691d-4c65-9548-dc4b2fd35640-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-jpqvb\" (UID: \"638fa066-691d-4c65-9548-dc4b2fd35640\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jpqvb" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.677156 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.677176 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.677199 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e7bad9ca-8600-4221-b33c-89e055ff177d-audit-policies\") pod \"apiserver-7bbb656c7d-qsl5r\" (UID: \"e7bad9ca-8600-4221-b33c-89e055ff177d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.677215 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2729c786-b7ea-4398-8023-bc7be080c44f-etcd-ca\") pod \"etcd-operator-b45778765-tv48v\" (UID: \"2729c786-b7ea-4398-8023-bc7be080c44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tv48v" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.677229 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9x6qf" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.677231 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42c27350-86e3-4a02-9194-5dd24c297a12-config\") pod \"machine-api-operator-5694c8668f-q9wnn\" (UID: \"42c27350-86e3-4a02-9194-5dd24c297a12\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q9wnn" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.677511 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d1cc5379-f931-4e88-b422-4ecc9e33e2a0-service-ca\") pod \"console-f9d7485db-x6zks\" (UID: \"d1cc5379-f931-4e88-b422-4ecc9e33e2a0\") " pod="openshift-console/console-f9d7485db-x6zks" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.677548 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnmg4\" (UniqueName: \"kubernetes.io/projected/638fa066-691d-4c65-9548-dc4b2fd35640-kube-api-access-fnmg4\") pod \"cluster-image-registry-operator-dc59b4c8b-jpqvb\" (UID: \"638fa066-691d-4c65-9548-dc4b2fd35640\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jpqvb" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.677578 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47636a6f-dad5-46f0-b080-6d4c37652860-serving-cert\") pod \"openshift-config-operator-7777fb866f-rjpqz\" (UID: \"47636a6f-dad5-46f0-b080-6d4c37652860\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rjpqz" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.677605 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/42c27350-86e3-4a02-9194-5dd24c297a12-images\") pod \"machine-api-operator-5694c8668f-q9wnn\" (UID: \"42c27350-86e3-4a02-9194-5dd24c297a12\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q9wnn" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.677632 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/af694d93-1240-4a87-a2fe-153bb2401143-audit-policies\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.677654 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71784ae3-e88b-48b3-94e4-f312f673084d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5wsq8\" (UID: \"71784ae3-e88b-48b3-94e4-f312f673084d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5wsq8" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.677673 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71784ae3-e88b-48b3-94e4-f312f673084d-serving-cert\") pod \"authentication-operator-69f744f599-5wsq8\" (UID: \"71784ae3-e88b-48b3-94e4-f312f673084d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5wsq8" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.677695 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqspz\" (UniqueName: \"kubernetes.io/projected/47636a6f-dad5-46f0-b080-6d4c37652860-kube-api-access-fqspz\") pod \"openshift-config-operator-7777fb866f-rjpqz\" (UID: \"47636a6f-dad5-46f0-b080-6d4c37652860\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rjpqz" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.677718 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/728bf40a-9dfc-4850-a3cf-2b506c0fe68c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-98cfq\" (UID: \"728bf40a-9dfc-4850-a3cf-2b506c0fe68c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-98cfq" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.677742 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9ebaa53-a616-4d3f-a69f-19da113978c3-serving-cert\") pod \"controller-manager-879f6c89f-jmrsp\" (UID: \"e9ebaa53-a616-4d3f-a69f-19da113978c3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jmrsp" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.677767 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71784ae3-e88b-48b3-94e4-f312f673084d-config\") pod \"authentication-operator-69f744f599-5wsq8\" (UID: \"71784ae3-e88b-48b3-94e4-f312f673084d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5wsq8" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.677808 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/af694d93-1240-4a87-a2fe-153bb2401143-audit-dir\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.677850 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bt4n\" (UniqueName: \"kubernetes.io/projected/634c4722-8d7d-463a-9844-8213116a4ce7-kube-api-access-8bt4n\") pod \"console-operator-58897d9998-jgwzk\" (UID: \"634c4722-8d7d-463a-9844-8213116a4ce7\") " pod="openshift-console-operator/console-operator-58897d9998-jgwzk" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.677913 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d1cc5379-f931-4e88-b422-4ecc9e33e2a0-console-serving-cert\") pod \"console-f9d7485db-x6zks\" (UID: \"d1cc5379-f931-4e88-b422-4ecc9e33e2a0\") " pod="openshift-console/console-f9d7485db-x6zks" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.677942 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7bad9ca-8600-4221-b33c-89e055ff177d-serving-cert\") pod \"apiserver-7bbb656c7d-qsl5r\" (UID: \"e7bad9ca-8600-4221-b33c-89e055ff177d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.677971 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2729c786-b7ea-4398-8023-bc7be080c44f-config\") pod \"etcd-operator-b45778765-tv48v\" (UID: \"2729c786-b7ea-4398-8023-bc7be080c44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tv48v" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.678001 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwwr9\" (UniqueName: \"kubernetes.io/projected/728bf40a-9dfc-4850-a3cf-2b506c0fe68c-kube-api-access-kwwr9\") pod \"cluster-samples-operator-665b6dd947-98cfq\" (UID: \"728bf40a-9dfc-4850-a3cf-2b506c0fe68c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-98cfq" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.678032 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9379cfa5-977e-46d6-b841-f1b8c99a6c75-encryption-config\") pod \"apiserver-76f77b778f-b5z5t\" (UID: \"9379cfa5-977e-46d6-b841-f1b8c99a6c75\") " pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.678059 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpbtv\" (UniqueName: \"kubernetes.io/projected/0a7f0592-7c89-496c-acb0-3ae031dbffb1-kube-api-access-zpbtv\") pod \"downloads-7954f5f757-5rnkx\" (UID: \"0a7f0592-7c89-496c-acb0-3ae031dbffb1\") " pod="openshift-console/downloads-7954f5f757-5rnkx" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.678080 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/983911ed-39aa-4717-bc5b-a03c1a5ab47d-serving-cert\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-t4gw6\" (UID: \"983911ed-39aa-4717-bc5b-a03c1a5ab47d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t4gw6" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.678114 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/638fa066-691d-4c65-9548-dc4b2fd35640-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-jpqvb\" (UID: \"638fa066-691d-4c65-9548-dc4b2fd35640\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jpqvb" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.678134 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.678166 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2729c786-b7ea-4398-8023-bc7be080c44f-etcd-service-ca\") pod \"etcd-operator-b45778765-tv48v\" (UID: \"2729c786-b7ea-4398-8023-bc7be080c44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tv48v" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.678185 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/634c4722-8d7d-463a-9844-8213116a4ce7-config\") pod \"console-operator-58897d9998-jgwzk\" (UID: \"634c4722-8d7d-463a-9844-8213116a4ce7\") " pod="openshift-console-operator/console-operator-58897d9998-jgwzk" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.678217 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b70a09c-424c-4317-a719-e0dbb6eefe1b-serving-cert\") pod \"route-controller-manager-6576b87f9c-k9lpf\" (UID: \"5b70a09c-424c-4317-a719-e0dbb6eefe1b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k9lpf" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.678314 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.678416 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3f6c72-88f5-4e25-a2dc-fbc18e402afc-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lvmjt\" (UID: \"6b3f6c72-88f5-4e25-a2dc-fbc18e402afc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lvmjt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.678459 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/47636a6f-dad5-46f0-b080-6d4c37652860-available-featuregates\") pod \"openshift-config-operator-7777fb866f-rjpqz\" (UID: \"47636a6f-dad5-46f0-b080-6d4c37652860\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rjpqz" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.678496 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9379cfa5-977e-46d6-b841-f1b8c99a6c75-audit\") pod \"apiserver-76f77b778f-b5z5t\" (UID: \"9379cfa5-977e-46d6-b841-f1b8c99a6c75\") " pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.678600 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d1cc5379-f931-4e88-b422-4ecc9e33e2a0-console-config\") pod \"console-f9d7485db-x6zks\" (UID: \"d1cc5379-f931-4e88-b422-4ecc9e33e2a0\") " pod="openshift-console/console-f9d7485db-x6zks" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.678644 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9379cfa5-977e-46d6-b841-f1b8c99a6c75-node-pullsecrets\") pod \"apiserver-76f77b778f-b5z5t\" (UID: \"9379cfa5-977e-46d6-b841-f1b8c99a6c75\") " pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.678672 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zrw2\" (UniqueName: \"kubernetes.io/projected/9379cfa5-977e-46d6-b841-f1b8c99a6c75-kube-api-access-9zrw2\") pod \"apiserver-76f77b778f-b5z5t\" (UID: \"9379cfa5-977e-46d6-b841-f1b8c99a6c75\") " pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.678717 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.678796 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/634c4722-8d7d-463a-9844-8213116a4ce7-serving-cert\") pod \"console-operator-58897d9998-jgwzk\" (UID: \"634c4722-8d7d-463a-9844-8213116a4ce7\") " pod="openshift-console-operator/console-operator-58897d9998-jgwzk" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.678835 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89f98230-b1df-4e65-9163-a82223eb2cf5-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-pxz9b\" (UID: \"89f98230-b1df-4e65-9163-a82223eb2cf5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxz9b" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.678865 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9379cfa5-977e-46d6-b841-f1b8c99a6c75-trusted-ca-bundle\") pod \"apiserver-76f77b778f-b5z5t\" 
(UID: \"9379cfa5-977e-46d6-b841-f1b8c99a6c75\") " pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.678942 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.678981 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.679012 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2g9f\" (UniqueName: \"kubernetes.io/projected/af694d93-1240-4a87-a2fe-153bb2401143-kube-api-access-q2g9f\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.679051 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e11d4c83-c131-4dd3-b20f-f38faad79236-config\") pod \"machine-approver-56656f9798-hz6v7\" (UID: \"e11d4c83-c131-4dd3-b20f-f38faad79236\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hz6v7" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.679150 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nddn2\" (UniqueName: \"kubernetes.io/projected/e11d4c83-c131-4dd3-b20f-f38faad79236-kube-api-access-nddn2\") pod \"machine-approver-56656f9798-hz6v7\" (UID: \"e11d4c83-c131-4dd3-b20f-f38faad79236\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hz6v7" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.679196 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9379cfa5-977e-46d6-b841-f1b8c99a6c75-etcd-client\") pod \"apiserver-76f77b778f-b5z5t\" (UID: \"9379cfa5-977e-46d6-b841-f1b8c99a6c75\") " pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.679244 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/983911ed-39aa-4717-bc5b-a03c1a5ab47d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-t4gw6\" (UID: \"983911ed-39aa-4717-bc5b-a03c1a5ab47d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t4gw6" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.679394 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d1cc5379-f931-4e88-b422-4ecc9e33e2a0-oauth-serving-cert\") pod \"console-f9d7485db-x6zks\" (UID: 
\"d1cc5379-f931-4e88-b422-4ecc9e33e2a0\") " pod="openshift-console/console-f9d7485db-x6zks" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.679429 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7bad9ca-8600-4221-b33c-89e055ff177d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-qsl5r\" (UID: \"e7bad9ca-8600-4221-b33c-89e055ff177d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.679454 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e7bad9ca-8600-4221-b33c-89e055ff177d-audit-dir\") pod \"apiserver-7bbb656c7d-qsl5r\" (UID: \"e7bad9ca-8600-4221-b33c-89e055ff177d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.679479 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e11d4c83-c131-4dd3-b20f-f38faad79236-machine-approver-tls\") pod \"machine-approver-56656f9798-hz6v7\" (UID: \"e11d4c83-c131-4dd3-b20f-f38faad79236\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hz6v7" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.679500 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n54m6\" (UniqueName: \"kubernetes.io/projected/d1cc5379-f931-4e88-b422-4ecc9e33e2a0-kube-api-access-n54m6\") pod \"console-f9d7485db-x6zks\" (UID: \"d1cc5379-f931-4e88-b422-4ecc9e33e2a0\") " pod="openshift-console/console-f9d7485db-x6zks" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.679538 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3f6c72-88f5-4e25-a2dc-fbc18e402afc-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lvmjt\" (UID: \"6b3f6c72-88f5-4e25-a2dc-fbc18e402afc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lvmjt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.679569 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwrjc\" (UniqueName: \"kubernetes.io/projected/2729c786-b7ea-4398-8023-bc7be080c44f-kube-api-access-qwrjc\") pod \"etcd-operator-b45778765-tv48v\" (UID: \"2729c786-b7ea-4398-8023-bc7be080c44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tv48v" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.679597 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.679631 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b70a09c-424c-4317-a719-e0dbb6eefe1b-config\") pod \"route-controller-manager-6576b87f9c-k9lpf\" (UID: \"5b70a09c-424c-4317-a719-e0dbb6eefe1b\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k9lpf" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.679653 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km8vb\" (UniqueName: \"kubernetes.io/projected/5b70a09c-424c-4317-a719-e0dbb6eefe1b-kube-api-access-km8vb\") pod \"route-controller-manager-6576b87f9c-k9lpf\" (UID: \"5b70a09c-424c-4317-a719-e0dbb6eefe1b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k9lpf" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.679675 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d1cc5379-f931-4e88-b422-4ecc9e33e2a0-console-oauth-config\") pod \"console-f9d7485db-x6zks\" (UID: \"d1cc5379-f931-4e88-b422-4ecc9e33e2a0\") " pod="openshift-console/console-f9d7485db-x6zks" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.679692 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e7bad9ca-8600-4221-b33c-89e055ff177d-etcd-client\") pod \"apiserver-7bbb656c7d-qsl5r\" (UID: \"e7bad9ca-8600-4221-b33c-89e055ff177d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.679851 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89f98230-b1df-4e65-9163-a82223eb2cf5-config\") pod \"openshift-apiserver-operator-796bbdcf4f-pxz9b\" (UID: \"89f98230-b1df-4e65-9163-a82223eb2cf5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxz9b" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.679878 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/638fa066-691d-4c65-9548-dc4b2fd35640-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-jpqvb\" (UID: \"638fa066-691d-4c65-9548-dc4b2fd35640\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jpqvb" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.679905 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knh99\" (UniqueName: \"kubernetes.io/projected/e9ebaa53-a616-4d3f-a69f-19da113978c3-kube-api-access-knh99\") pod \"controller-manager-879f6c89f-jmrsp\" (UID: \"e9ebaa53-a616-4d3f-a69f-19da113978c3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jmrsp" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.679919 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e11d4c83-c131-4dd3-b20f-f38faad79236-config\") pod \"machine-approver-56656f9798-hz6v7\" (UID: \"e11d4c83-c131-4dd3-b20f-f38faad79236\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hz6v7" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.679929 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dd7x\" (UniqueName: \"kubernetes.io/projected/983911ed-39aa-4717-bc5b-a03c1a5ab47d-kube-api-access-9dd7x\") pod \"openshift-controller-manager-operator-756b6f6bc6-t4gw6\" (UID: \"983911ed-39aa-4717-bc5b-a03c1a5ab47d\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t4gw6" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.680029 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9379cfa5-977e-46d6-b841-f1b8c99a6c75-audit-dir\") pod \"apiserver-76f77b778f-b5z5t\" (UID: \"9379cfa5-977e-46d6-b841-f1b8c99a6c75\") " pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.680075 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e9ebaa53-a616-4d3f-a69f-19da113978c3-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-jmrsp\" (UID: \"e9ebaa53-a616-4d3f-a69f-19da113978c3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jmrsp" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.680128 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2729c786-b7ea-4398-8023-bc7be080c44f-etcd-client\") pod \"etcd-operator-b45778765-tv48v\" (UID: \"2729c786-b7ea-4398-8023-bc7be080c44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tv48v" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.680173 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a16e132-7543-4642-bd55-b3abf288e009-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-7khjp\" (UID: \"2a16e132-7543-4642-bd55-b3abf288e009\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7khjp" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.680239 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9379cfa5-977e-46d6-b841-f1b8c99a6c75-serving-cert\") pod \"apiserver-76f77b778f-b5z5t\" (UID: \"9379cfa5-977e-46d6-b841-f1b8c99a6c75\") " pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.680274 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8jll\" (UniqueName: \"kubernetes.io/projected/e7bad9ca-8600-4221-b33c-89e055ff177d-kube-api-access-k8jll\") pod \"apiserver-7bbb656c7d-qsl5r\" (UID: \"e7bad9ca-8600-4221-b33c-89e055ff177d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.680307 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9379cfa5-977e-46d6-b841-f1b8c99a6c75-image-import-ca\") pod \"apiserver-76f77b778f-b5z5t\" (UID: \"9379cfa5-977e-46d6-b841-f1b8c99a6c75\") " pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.680337 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc 
kubenswrapper[4684]: I0121 10:08:18.680411 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e9ebaa53-a616-4d3f-a69f-19da113978c3-client-ca\") pod \"controller-manager-879f6c89f-jmrsp\" (UID: \"e9ebaa53-a616-4d3f-a69f-19da113978c3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jmrsp" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.680445 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e11d4c83-c131-4dd3-b20f-f38faad79236-auth-proxy-config\") pod \"machine-approver-56656f9798-hz6v7\" (UID: \"e11d4c83-c131-4dd3-b20f-f38faad79236\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hz6v7" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.680474 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.680500 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6z7q\" (UniqueName: \"kubernetes.io/projected/42c27350-86e3-4a02-9194-5dd24c297a12-kube-api-access-h6z7q\") pod \"machine-api-operator-5694c8668f-q9wnn\" (UID: \"42c27350-86e3-4a02-9194-5dd24c297a12\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q9wnn" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.680536 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a16e132-7543-4642-bd55-b3abf288e009-config\") pod \"kube-apiserver-operator-766d6c64bb-7khjp\" (UID: \"2a16e132-7543-4642-bd55-b3abf288e009\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7khjp" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.680565 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9379cfa5-977e-46d6-b841-f1b8c99a6c75-config\") pod \"apiserver-76f77b778f-b5z5t\" (UID: \"9379cfa5-977e-46d6-b841-f1b8c99a6c75\") " pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.680593 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b70a09c-424c-4317-a719-e0dbb6eefe1b-client-ca\") pod \"route-controller-manager-6576b87f9c-k9lpf\" (UID: \"5b70a09c-424c-4317-a719-e0dbb6eefe1b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k9lpf" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.680613 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9379cfa5-977e-46d6-b841-f1b8c99a6c75-etcd-serving-ca\") pod \"apiserver-76f77b778f-b5z5t\" (UID: \"9379cfa5-977e-46d6-b841-f1b8c99a6c75\") " pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.680635 4684 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b3f6c72-88f5-4e25-a2dc-fbc18e402afc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lvmjt\" (UID: \"6b3f6c72-88f5-4e25-a2dc-fbc18e402afc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lvmjt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.680663 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2729c786-b7ea-4398-8023-bc7be080c44f-serving-cert\") pod \"etcd-operator-b45778765-tv48v\" (UID: \"2729c786-b7ea-4398-8023-bc7be080c44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tv48v" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.680640 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89f98230-b1df-4e65-9163-a82223eb2cf5-config\") pod \"openshift-apiserver-operator-796bbdcf4f-pxz9b\" (UID: \"89f98230-b1df-4e65-9163-a82223eb2cf5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxz9b" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.681322 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e11d4c83-c131-4dd3-b20f-f38faad79236-auth-proxy-config\") pod \"machine-approver-56656f9798-hz6v7\" (UID: \"e11d4c83-c131-4dd3-b20f-f38faad79236\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hz6v7" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.681383 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b70a09c-424c-4317-a719-e0dbb6eefe1b-config\") pod \"route-controller-manager-6576b87f9c-k9lpf\" (UID: \"5b70a09c-424c-4317-a719-e0dbb6eefe1b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k9lpf" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.681439 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b70a09c-424c-4317-a719-e0dbb6eefe1b-client-ca\") pod \"route-controller-manager-6576b87f9c-k9lpf\" (UID: \"5b70a09c-424c-4317-a719-e0dbb6eefe1b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k9lpf" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.682651 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fbz"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.683510 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hvhlb"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.684026 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fbz" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.684124 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hvhlb" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.684493 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-98cfq"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.685790 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-dg7k7"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.686532 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89f98230-b1df-4e65-9163-a82223eb2cf5-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-pxz9b\" (UID: \"89f98230-b1df-4e65-9163-a82223eb2cf5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxz9b" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.686753 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.687082 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e11d4c83-c131-4dd3-b20f-f38faad79236-machine-approver-tls\") pod \"machine-approver-56656f9798-hz6v7\" (UID: \"e11d4c83-c131-4dd3-b20f-f38faad79236\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hz6v7" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.687523 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-5rnkx"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.687647 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-dg7k7" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.688197 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dcnrm"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.689349 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dcnrm" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.689706 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jgwzk"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.690511 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b70a09c-424c-4317-a719-e0dbb6eefe1b-serving-cert\") pod \"route-controller-manager-6576b87f9c-k9lpf\" (UID: \"5b70a09c-424c-4317-a719-e0dbb6eefe1b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k9lpf" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.691331 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483160-9pprb"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.692288 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-9pprb" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.692522 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-c5hr7"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.692994 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-c5hr7" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.693876 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-tv48v"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.695211 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-q9wnn"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.696528 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-dq2dv"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.697759 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dd9dl"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.698961 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-b5z5t"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.701196 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7khjp"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.701775 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t4gw6"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.702818 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vn2nz"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.704170 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.705453 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lvmjt"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.706692 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-plndh"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.706802 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.707997 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-s5mkm"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.709280 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-vnnj7"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.709568 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-s5mkm" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.710113 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vnnj7" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.710437 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9x6qf"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.711867 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8k2vm"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.715445 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-dg7k7"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.719630 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72t62"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.719699 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-x97lq"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.720964 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-x6zks"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.722989 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-cs67x"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.728613 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6wfjd"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.728795 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-cs67x" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.728805 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.734397 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-c5hr7"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.736395 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-64j7q"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.738007 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-qjbj4"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.739538 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-mnmmn"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.740744 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-mnmmn" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.741212 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483160-9pprb"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.742659 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fbz"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.744045 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-s5mkm"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.745452 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hvhlb"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.746323 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.746882 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-65tjm"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.748175 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mnmmn"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.749630 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-jfblx"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.751038 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dcnrm"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.752822 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vnnj7"] Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.766885 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.781324 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9379cfa5-977e-46d6-b841-f1b8c99a6c75-encryption-config\") pod \"apiserver-76f77b778f-b5z5t\" (UID: \"9379cfa5-977e-46d6-b841-f1b8c99a6c75\") " pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.781410 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpbtv\" (UniqueName: \"kubernetes.io/projected/0a7f0592-7c89-496c-acb0-3ae031dbffb1-kube-api-access-zpbtv\") pod \"downloads-7954f5f757-5rnkx\" (UID: \"0a7f0592-7c89-496c-acb0-3ae031dbffb1\") " pod="openshift-console/downloads-7954f5f757-5rnkx" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.781431 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/638fa066-691d-4c65-9548-dc4b2fd35640-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-jpqvb\" (UID: \"638fa066-691d-4c65-9548-dc4b2fd35640\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jpqvb" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.781453 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/c5367e25-e2b2-4280-8e56-ad67c088c382-profile-collector-cert\") pod \"olm-operator-6b444d44fb-v6fbz\" (UID: \"c5367e25-e2b2-4280-8e56-ad67c088c382\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fbz" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.781485 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.781506 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3f6c72-88f5-4e25-a2dc-fbc18e402afc-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lvmjt\" (UID: \"6b3f6c72-88f5-4e25-a2dc-fbc18e402afc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lvmjt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.781533 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d1cc5379-f931-4e88-b422-4ecc9e33e2a0-console-config\") pod \"console-f9d7485db-x6zks\" (UID: \"d1cc5379-f931-4e88-b422-4ecc9e33e2a0\") " pod="openshift-console/console-f9d7485db-x6zks" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.781552 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9379cfa5-977e-46d6-b841-f1b8c99a6c75-audit\") pod \"apiserver-76f77b778f-b5z5t\" (UID: \"9379cfa5-977e-46d6-b841-f1b8c99a6c75\") " pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.781574 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf9nf\" (UniqueName: \"kubernetes.io/projected/6f6dc594-89a5-4c0f-9132-296c2e028c7b-kube-api-access-gf9nf\") pod \"service-ca-9c57cc56f-c5hr7\" (UID: \"6f6dc594-89a5-4c0f-9132-296c2e028c7b\") " pod="openshift-service-ca/service-ca-9c57cc56f-c5hr7" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.781597 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxwn2\" (UniqueName: \"kubernetes.io/projected/510de763-cfdd-4464-a9d6-d061ad7b88bf-kube-api-access-vxwn2\") pod \"ingress-operator-5b745b69d9-9x6qf\" (UID: \"510de763-cfdd-4464-a9d6-d061ad7b88bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9x6qf" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.781693 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.781728 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9379cfa5-977e-46d6-b841-f1b8c99a6c75-trusted-ca-bundle\") pod \"apiserver-76f77b778f-b5z5t\" (UID: 
\"9379cfa5-977e-46d6-b841-f1b8c99a6c75\") " pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.781752 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.781777 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.781808 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/510de763-cfdd-4464-a9d6-d061ad7b88bf-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9x6qf\" (UID: \"510de763-cfdd-4464-a9d6-d061ad7b88bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9x6qf" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.781830 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwrjc\" (UniqueName: \"kubernetes.io/projected/2729c786-b7ea-4398-8023-bc7be080c44f-kube-api-access-qwrjc\") pod \"etcd-operator-b45778765-tv48v\" (UID: \"2729c786-b7ea-4398-8023-bc7be080c44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tv48v" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.781860 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.781886 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/839bb9d6-4528-484e-9e12-ea749b5e177c-apiservice-cert\") pod \"packageserver-d55dfcdfc-dd9dl\" (UID: \"839bb9d6-4528-484e-9e12-ea749b5e177c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dd9dl" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.781911 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/638fa066-691d-4c65-9548-dc4b2fd35640-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-jpqvb\" (UID: \"638fa066-691d-4c65-9548-dc4b2fd35640\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jpqvb" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.781931 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dd7x\" (UniqueName: \"kubernetes.io/projected/983911ed-39aa-4717-bc5b-a03c1a5ab47d-kube-api-access-9dd7x\") pod \"openshift-controller-manager-operator-756b6f6bc6-t4gw6\" (UID: 
\"983911ed-39aa-4717-bc5b-a03c1a5ab47d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t4gw6" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.781951 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9379cfa5-977e-46d6-b841-f1b8c99a6c75-audit-dir\") pod \"apiserver-76f77b778f-b5z5t\" (UID: \"9379cfa5-977e-46d6-b841-f1b8c99a6c75\") " pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.781970 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/50a26307-43c9-4325-b992-91cac18db66c-metrics-certs\") pod \"router-default-5444994796-6g6vs\" (UID: \"50a26307-43c9-4325-b992-91cac18db66c\") " pod="openshift-ingress/router-default-5444994796-6g6vs" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.781993 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e9ebaa53-a616-4d3f-a69f-19da113978c3-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-jmrsp\" (UID: \"e9ebaa53-a616-4d3f-a69f-19da113978c3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jmrsp" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.782013 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6f6dc594-89a5-4c0f-9132-296c2e028c7b-signing-cabundle\") pod \"service-ca-9c57cc56f-c5hr7\" (UID: \"6f6dc594-89a5-4c0f-9132-296c2e028c7b\") " pod="openshift-service-ca/service-ca-9c57cc56f-c5hr7" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.782033 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8jll\" (UniqueName: \"kubernetes.io/projected/e7bad9ca-8600-4221-b33c-89e055ff177d-kube-api-access-k8jll\") pod \"apiserver-7bbb656c7d-qsl5r\" (UID: \"e7bad9ca-8600-4221-b33c-89e055ff177d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.782051 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e9ebaa53-a616-4d3f-a69f-19da113978c3-client-ca\") pod \"controller-manager-879f6c89f-jmrsp\" (UID: \"e9ebaa53-a616-4d3f-a69f-19da113978c3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jmrsp" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.782069 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d6jr\" (UniqueName: \"kubernetes.io/projected/9ffd57a7-387f-48ec-9643-a67d392ce9c0-kube-api-access-9d6jr\") pod \"migrator-59844c95c7-65tjm\" (UID: \"9ffd57a7-387f-48ec-9643-a67d392ce9c0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-65tjm" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.782091 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6z7q\" (UniqueName: \"kubernetes.io/projected/42c27350-86e3-4a02-9194-5dd24c297a12-kube-api-access-h6z7q\") pod \"machine-api-operator-5694c8668f-q9wnn\" (UID: \"42c27350-86e3-4a02-9194-5dd24c297a12\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q9wnn" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.782111 
4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9379cfa5-977e-46d6-b841-f1b8c99a6c75-etcd-serving-ca\") pod \"apiserver-76f77b778f-b5z5t\" (UID: \"9379cfa5-977e-46d6-b841-f1b8c99a6c75\") " pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.782133 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6l5m\" (UniqueName: \"kubernetes.io/projected/7f11428f-574a-480a-b8af-b9a65d9de720-kube-api-access-s6l5m\") pod \"package-server-manager-789f6589d5-dcnrm\" (UID: \"7f11428f-574a-480a-b8af-b9a65d9de720\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dcnrm" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.782155 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e7bad9ca-8600-4221-b33c-89e055ff177d-encryption-config\") pod \"apiserver-7bbb656c7d-qsl5r\" (UID: \"e7bad9ca-8600-4221-b33c-89e055ff177d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.782176 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/634c4722-8d7d-463a-9844-8213116a4ce7-trusted-ca\") pod \"console-operator-58897d9998-jgwzk\" (UID: \"634c4722-8d7d-463a-9844-8213116a4ce7\") " pod="openshift-console-operator/console-operator-58897d9998-jgwzk" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.782196 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/42c27350-86e3-4a02-9194-5dd24c297a12-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-q9wnn\" (UID: \"42c27350-86e3-4a02-9194-5dd24c297a12\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q9wnn" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.782215 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wzmf\" (UniqueName: \"kubernetes.io/projected/839bb9d6-4528-484e-9e12-ea749b5e177c-kube-api-access-2wzmf\") pod \"packageserver-d55dfcdfc-dd9dl\" (UID: \"839bb9d6-4528-484e-9e12-ea749b5e177c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dd9dl" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.782236 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a16e132-7543-4642-bd55-b3abf288e009-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-7khjp\" (UID: \"2a16e132-7543-4642-bd55-b3abf288e009\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7khjp" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.782261 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.782284 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1cc5379-f931-4e88-b422-4ecc9e33e2a0-trusted-ca-bundle\") pod \"console-f9d7485db-x6zks\" (UID: \"d1cc5379-f931-4e88-b422-4ecc9e33e2a0\") " pod="openshift-console/console-f9d7485db-x6zks" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.782321 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e7bad9ca-8600-4221-b33c-89e055ff177d-audit-policies\") pod \"apiserver-7bbb656c7d-qsl5r\" (UID: \"e7bad9ca-8600-4221-b33c-89e055ff177d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.782341 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2729c786-b7ea-4398-8023-bc7be080c44f-etcd-ca\") pod \"etcd-operator-b45778765-tv48v\" (UID: \"2729c786-b7ea-4398-8023-bc7be080c44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tv48v" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.782379 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq9sw\" (UniqueName: \"kubernetes.io/projected/e6097625-6479-4b86-b888-78d72d1c7089-kube-api-access-wq9sw\") pod \"dns-operator-744455d44c-x97lq\" (UID: \"e6097625-6479-4b86-b888-78d72d1c7089\") " pod="openshift-dns-operator/dns-operator-744455d44c-x97lq" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.782399 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d1cc5379-f931-4e88-b422-4ecc9e33e2a0-service-ca\") pod \"console-f9d7485db-x6zks\" (UID: \"d1cc5379-f931-4e88-b422-4ecc9e33e2a0\") " pod="openshift-console/console-f9d7485db-x6zks" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.782419 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnmg4\" (UniqueName: \"kubernetes.io/projected/638fa066-691d-4c65-9548-dc4b2fd35640-kube-api-access-fnmg4\") pod \"cluster-image-registry-operator-dc59b4c8b-jpqvb\" (UID: \"638fa066-691d-4c65-9548-dc4b2fd35640\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jpqvb" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.782446 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47636a6f-dad5-46f0-b080-6d4c37652860-serving-cert\") pod \"openshift-config-operator-7777fb866f-rjpqz\" (UID: \"47636a6f-dad5-46f0-b080-6d4c37652860\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rjpqz" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.782469 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/42c27350-86e3-4a02-9194-5dd24c297a12-images\") pod \"machine-api-operator-5694c8668f-q9wnn\" (UID: \"42c27350-86e3-4a02-9194-5dd24c297a12\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q9wnn" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.782493 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/af694d93-1240-4a87-a2fe-153bb2401143-audit-policies\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" 
Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.782519 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71784ae3-e88b-48b3-94e4-f312f673084d-serving-cert\") pod \"authentication-operator-69f744f599-5wsq8\" (UID: \"71784ae3-e88b-48b3-94e4-f312f673084d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5wsq8" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.782561 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqspz\" (UniqueName: \"kubernetes.io/projected/47636a6f-dad5-46f0-b080-6d4c37652860-kube-api-access-fqspz\") pod \"openshift-config-operator-7777fb866f-rjpqz\" (UID: \"47636a6f-dad5-46f0-b080-6d4c37652860\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rjpqz" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.783049 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d1cc5379-f931-4e88-b422-4ecc9e33e2a0-console-config\") pod \"console-f9d7485db-x6zks\" (UID: \"d1cc5379-f931-4e88-b422-4ecc9e33e2a0\") " pod="openshift-console/console-f9d7485db-x6zks" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.783172 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.783346 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9379cfa5-977e-46d6-b841-f1b8c99a6c75-audit-dir\") pod \"apiserver-76f77b778f-b5z5t\" (UID: \"9379cfa5-977e-46d6-b841-f1b8c99a6c75\") " pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.783381 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e9ebaa53-a616-4d3f-a69f-19da113978c3-client-ca\") pod \"controller-manager-879f6c89f-jmrsp\" (UID: \"e9ebaa53-a616-4d3f-a69f-19da113978c3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jmrsp" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.783405 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/728bf40a-9dfc-4850-a3cf-2b506c0fe68c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-98cfq\" (UID: \"728bf40a-9dfc-4850-a3cf-2b506c0fe68c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-98cfq" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.783501 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9379cfa5-977e-46d6-b841-f1b8c99a6c75-trusted-ca-bundle\") pod \"apiserver-76f77b778f-b5z5t\" (UID: \"9379cfa5-977e-46d6-b841-f1b8c99a6c75\") " pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.782413 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9379cfa5-977e-46d6-b841-f1b8c99a6c75-audit\") pod 
\"apiserver-76f77b778f-b5z5t\" (UID: \"9379cfa5-977e-46d6-b841-f1b8c99a6c75\") " pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.783623 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9ebaa53-a616-4d3f-a69f-19da113978c3-serving-cert\") pod \"controller-manager-879f6c89f-jmrsp\" (UID: \"e9ebaa53-a616-4d3f-a69f-19da113978c3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jmrsp" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.783673 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/638fa066-691d-4c65-9548-dc4b2fd35640-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-jpqvb\" (UID: \"638fa066-691d-4c65-9548-dc4b2fd35640\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jpqvb" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.783674 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71784ae3-e88b-48b3-94e4-f312f673084d-config\") pod \"authentication-operator-69f744f599-5wsq8\" (UID: \"71784ae3-e88b-48b3-94e4-f312f673084d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5wsq8" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.783726 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/af694d93-1240-4a87-a2fe-153bb2401143-audit-dir\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.783749 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7bad9ca-8600-4221-b33c-89e055ff177d-serving-cert\") pod \"apiserver-7bbb656c7d-qsl5r\" (UID: \"e7bad9ca-8600-4221-b33c-89e055ff177d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.783770 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/983911ed-39aa-4717-bc5b-a03c1a5ab47d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-t4gw6\" (UID: \"983911ed-39aa-4717-bc5b-a03c1a5ab47d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t4gw6" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.783795 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp9sw\" (UniqueName: \"kubernetes.io/projected/c5367e25-e2b2-4280-8e56-ad67c088c382-kube-api-access-sp9sw\") pod \"olm-operator-6b444d44fb-v6fbz\" (UID: \"c5367e25-e2b2-4280-8e56-ad67c088c382\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fbz" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.783815 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e6097625-6479-4b86-b888-78d72d1c7089-metrics-tls\") pod \"dns-operator-744455d44c-x97lq\" (UID: \"e6097625-6479-4b86-b888-78d72d1c7089\") " pod="openshift-dns-operator/dns-operator-744455d44c-x97lq" Jan 21 10:08:18 crc 
kubenswrapper[4684]: I0121 10:08:18.783840 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.783863 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2729c786-b7ea-4398-8023-bc7be080c44f-etcd-service-ca\") pod \"etcd-operator-b45778765-tv48v\" (UID: \"2729c786-b7ea-4398-8023-bc7be080c44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tv48v" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.783883 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/634c4722-8d7d-463a-9844-8213116a4ce7-config\") pod \"console-operator-58897d9998-jgwzk\" (UID: \"634c4722-8d7d-463a-9844-8213116a4ce7\") " pod="openshift-console-operator/console-operator-58897d9998-jgwzk" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.783907 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/47636a6f-dad5-46f0-b080-6d4c37652860-available-featuregates\") pod \"openshift-config-operator-7777fb866f-rjpqz\" (UID: \"47636a6f-dad5-46f0-b080-6d4c37652860\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rjpqz" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.784031 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9379cfa5-977e-46d6-b841-f1b8c99a6c75-node-pullsecrets\") pod \"apiserver-76f77b778f-b5z5t\" (UID: \"9379cfa5-977e-46d6-b841-f1b8c99a6c75\") " pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.784063 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zrw2\" (UniqueName: \"kubernetes.io/projected/9379cfa5-977e-46d6-b841-f1b8c99a6c75-kube-api-access-9zrw2\") pod \"apiserver-76f77b778f-b5z5t\" (UID: \"9379cfa5-977e-46d6-b841-f1b8c99a6c75\") " pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.784089 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/50a26307-43c9-4325-b992-91cac18db66c-stats-auth\") pod \"router-default-5444994796-6g6vs\" (UID: \"50a26307-43c9-4325-b992-91cac18db66c\") " pod="openshift-ingress/router-default-5444994796-6g6vs" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.784109 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/839bb9d6-4528-484e-9e12-ea749b5e177c-tmpfs\") pod \"packageserver-d55dfcdfc-dd9dl\" (UID: \"839bb9d6-4528-484e-9e12-ea749b5e177c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dd9dl" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.784133 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/839bb9d6-4528-484e-9e12-ea749b5e177c-webhook-cert\") pod \"packageserver-d55dfcdfc-dd9dl\" (UID: \"839bb9d6-4528-484e-9e12-ea749b5e177c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dd9dl" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.784166 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2g9f\" (UniqueName: \"kubernetes.io/projected/af694d93-1240-4a87-a2fe-153bb2401143-kube-api-access-q2g9f\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.784190 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/634c4722-8d7d-463a-9844-8213116a4ce7-serving-cert\") pod \"console-operator-58897d9998-jgwzk\" (UID: \"634c4722-8d7d-463a-9844-8213116a4ce7\") " pod="openshift-console-operator/console-operator-58897d9998-jgwzk" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.784212 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0eb2234-baac-4251-8bf4-631c7407c09f-serving-cert\") pod \"service-ca-operator-777779d784-qjbj4\" (UID: \"d0eb2234-baac-4251-8bf4-631c7407c09f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qjbj4" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.784245 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9379cfa5-977e-46d6-b841-f1b8c99a6c75-etcd-client\") pod \"apiserver-76f77b778f-b5z5t\" (UID: \"9379cfa5-977e-46d6-b841-f1b8c99a6c75\") " pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.784267 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/983911ed-39aa-4717-bc5b-a03c1a5ab47d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-t4gw6\" (UID: \"983911ed-39aa-4717-bc5b-a03c1a5ab47d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t4gw6" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.784293 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d1cc5379-f931-4e88-b422-4ecc9e33e2a0-oauth-serving-cert\") pod \"console-f9d7485db-x6zks\" (UID: \"d1cc5379-f931-4e88-b422-4ecc9e33e2a0\") " pod="openshift-console/console-f9d7485db-x6zks" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.784319 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7bad9ca-8600-4221-b33c-89e055ff177d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-qsl5r\" (UID: \"e7bad9ca-8600-4221-b33c-89e055ff177d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.785045 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e7bad9ca-8600-4221-b33c-89e055ff177d-audit-dir\") pod \"apiserver-7bbb656c7d-qsl5r\" (UID: \"e7bad9ca-8600-4221-b33c-89e055ff177d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" Jan 21 10:08:18 
crc kubenswrapper[4684]: I0121 10:08:18.785054 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1cc5379-f931-4e88-b422-4ecc9e33e2a0-trusted-ca-bundle\") pod \"console-f9d7485db-x6zks\" (UID: \"d1cc5379-f931-4e88-b422-4ecc9e33e2a0\") " pod="openshift-console/console-f9d7485db-x6zks" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.785087 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n54m6\" (UniqueName: \"kubernetes.io/projected/d1cc5379-f931-4e88-b422-4ecc9e33e2a0-kube-api-access-n54m6\") pod \"console-f9d7485db-x6zks\" (UID: \"d1cc5379-f931-4e88-b422-4ecc9e33e2a0\") " pod="openshift-console/console-f9d7485db-x6zks" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.785117 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3f6c72-88f5-4e25-a2dc-fbc18e402afc-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lvmjt\" (UID: \"6b3f6c72-88f5-4e25-a2dc-fbc18e402afc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lvmjt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.785145 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gctt\" (UniqueName: \"kubernetes.io/projected/50a26307-43c9-4325-b992-91cac18db66c-kube-api-access-8gctt\") pod \"router-default-5444994796-6g6vs\" (UID: \"50a26307-43c9-4325-b992-91cac18db66c\") " pod="openshift-ingress/router-default-5444994796-6g6vs" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.785170 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f11428f-574a-480a-b8af-b9a65d9de720-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dcnrm\" (UID: \"7f11428f-574a-480a-b8af-b9a65d9de720\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dcnrm" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.785238 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d1cc5379-f931-4e88-b422-4ecc9e33e2a0-console-oauth-config\") pod \"console-f9d7485db-x6zks\" (UID: \"d1cc5379-f931-4e88-b422-4ecc9e33e2a0\") " pod="openshift-console/console-f9d7485db-x6zks" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.785264 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e7bad9ca-8600-4221-b33c-89e055ff177d-etcd-client\") pod \"apiserver-7bbb656c7d-qsl5r\" (UID: \"e7bad9ca-8600-4221-b33c-89e055ff177d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.785291 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knh99\" (UniqueName: \"kubernetes.io/projected/e9ebaa53-a616-4d3f-a69f-19da113978c3-kube-api-access-knh99\") pod \"controller-manager-879f6c89f-jmrsp\" (UID: \"e9ebaa53-a616-4d3f-a69f-19da113978c3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jmrsp" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.785309 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/71784ae3-e88b-48b3-94e4-f312f673084d-config\") pod \"authentication-operator-69f744f599-5wsq8\" (UID: \"71784ae3-e88b-48b3-94e4-f312f673084d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5wsq8" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.785319 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-479w4\" (UniqueName: \"kubernetes.io/projected/d0eb2234-baac-4251-8bf4-631c7407c09f-kube-api-access-479w4\") pod \"service-ca-operator-777779d784-qjbj4\" (UID: \"d0eb2234-baac-4251-8bf4-631c7407c09f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qjbj4" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.785347 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/510de763-cfdd-4464-a9d6-d061ad7b88bf-trusted-ca\") pod \"ingress-operator-5b745b69d9-9x6qf\" (UID: \"510de763-cfdd-4464-a9d6-d061ad7b88bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9x6qf" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.785410 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e7bad9ca-8600-4221-b33c-89e055ff177d-audit-dir\") pod \"apiserver-7bbb656c7d-qsl5r\" (UID: \"e7bad9ca-8600-4221-b33c-89e055ff177d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.785417 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2729c786-b7ea-4398-8023-bc7be080c44f-etcd-client\") pod \"etcd-operator-b45778765-tv48v\" (UID: \"2729c786-b7ea-4398-8023-bc7be080c44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tv48v" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.785491 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a16e132-7543-4642-bd55-b3abf288e009-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-7khjp\" (UID: \"2a16e132-7543-4642-bd55-b3abf288e009\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7khjp" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.785546 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9379cfa5-977e-46d6-b841-f1b8c99a6c75-serving-cert\") pod \"apiserver-76f77b778f-b5z5t\" (UID: \"9379cfa5-977e-46d6-b841-f1b8c99a6c75\") " pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.785578 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9379cfa5-977e-46d6-b841-f1b8c99a6c75-image-import-ca\") pod \"apiserver-76f77b778f-b5z5t\" (UID: \"9379cfa5-977e-46d6-b841-f1b8c99a6c75\") " pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.785611 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.785641 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0eb2234-baac-4251-8bf4-631c7407c09f-config\") pod \"service-ca-operator-777779d784-qjbj4\" (UID: \"d0eb2234-baac-4251-8bf4-631c7407c09f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qjbj4" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.785666 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.785866 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7bad9ca-8600-4221-b33c-89e055ff177d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-qsl5r\" (UID: \"e7bad9ca-8600-4221-b33c-89e055ff177d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.786273 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e9ebaa53-a616-4d3f-a69f-19da113978c3-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-jmrsp\" (UID: \"e9ebaa53-a616-4d3f-a69f-19da113978c3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jmrsp" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.786631 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3f6c72-88f5-4e25-a2dc-fbc18e402afc-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lvmjt\" (UID: \"6b3f6c72-88f5-4e25-a2dc-fbc18e402afc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lvmjt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.788085 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/47636a6f-dad5-46f0-b080-6d4c37652860-available-featuregates\") pod \"openshift-config-operator-7777fb866f-rjpqz\" (UID: \"47636a6f-dad5-46f0-b080-6d4c37652860\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rjpqz" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.788138 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/af694d93-1240-4a87-a2fe-153bb2401143-audit-dir\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.788792 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2729c786-b7ea-4398-8023-bc7be080c44f-etcd-client\") pod \"etcd-operator-b45778765-tv48v\" (UID: \"2729c786-b7ea-4398-8023-bc7be080c44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tv48v" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.789021 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9379cfa5-977e-46d6-b841-f1b8c99a6c75-etcd-serving-ca\") pod \"apiserver-76f77b778f-b5z5t\" (UID: \"9379cfa5-977e-46d6-b841-f1b8c99a6c75\") " pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.789269 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9ebaa53-a616-4d3f-a69f-19da113978c3-serving-cert\") pod \"controller-manager-879f6c89f-jmrsp\" (UID: \"e9ebaa53-a616-4d3f-a69f-19da113978c3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jmrsp" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.789378 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9379cfa5-977e-46d6-b841-f1b8c99a6c75-encryption-config\") pod \"apiserver-76f77b778f-b5z5t\" (UID: \"9379cfa5-977e-46d6-b841-f1b8c99a6c75\") " pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.789960 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/af694d93-1240-4a87-a2fe-153bb2401143-audit-policies\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.789965 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/42c27350-86e3-4a02-9194-5dd24c297a12-images\") pod \"machine-api-operator-5694c8668f-q9wnn\" (UID: \"42c27350-86e3-4a02-9194-5dd24c297a12\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q9wnn" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.790878 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d1cc5379-f931-4e88-b422-4ecc9e33e2a0-service-ca\") pod \"console-f9d7485db-x6zks\" (UID: \"d1cc5379-f931-4e88-b422-4ecc9e33e2a0\") " pod="openshift-console/console-f9d7485db-x6zks" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.790938 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/728bf40a-9dfc-4850-a3cf-2b506c0fe68c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-98cfq\" (UID: \"728bf40a-9dfc-4850-a3cf-2b506c0fe68c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-98cfq" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.791011 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2729c786-b7ea-4398-8023-bc7be080c44f-etcd-ca\") pod \"etcd-operator-b45778765-tv48v\" (UID: \"2729c786-b7ea-4398-8023-bc7be080c44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tv48v" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.791098 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9379cfa5-977e-46d6-b841-f1b8c99a6c75-image-import-ca\") pod \"apiserver-76f77b778f-b5z5t\" (UID: \"9379cfa5-977e-46d6-b841-f1b8c99a6c75\") " pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.791110 4684 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/634c4722-8d7d-463a-9844-8213116a4ce7-config\") pod \"console-operator-58897d9998-jgwzk\" (UID: \"634c4722-8d7d-463a-9844-8213116a4ce7\") " pod="openshift-console-operator/console-operator-58897d9998-jgwzk" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.791216 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a16e132-7543-4642-bd55-b3abf288e009-config\") pod \"kube-apiserver-operator-766d6c64bb-7khjp\" (UID: \"2a16e132-7543-4642-bd55-b3abf288e009\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7khjp" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.791276 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9379cfa5-977e-46d6-b841-f1b8c99a6c75-config\") pod \"apiserver-76f77b778f-b5z5t\" (UID: \"9379cfa5-977e-46d6-b841-f1b8c99a6c75\") " pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.791591 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b3f6c72-88f5-4e25-a2dc-fbc18e402afc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lvmjt\" (UID: \"6b3f6c72-88f5-4e25-a2dc-fbc18e402afc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lvmjt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.791656 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2729c786-b7ea-4398-8023-bc7be080c44f-serving-cert\") pod \"etcd-operator-b45778765-tv48v\" (UID: \"2729c786-b7ea-4398-8023-bc7be080c44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tv48v" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.791688 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71784ae3-e88b-48b3-94e4-f312f673084d-service-ca-bundle\") pod \"authentication-operator-69f744f599-5wsq8\" (UID: \"71784ae3-e88b-48b3-94e4-f312f673084d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5wsq8" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.791709 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/42c27350-86e3-4a02-9194-5dd24c297a12-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-q9wnn\" (UID: \"42c27350-86e3-4a02-9194-5dd24c297a12\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q9wnn" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.791722 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/50a26307-43c9-4325-b992-91cac18db66c-default-certificate\") pod \"router-default-5444994796-6g6vs\" (UID: \"50a26307-43c9-4325-b992-91cac18db66c\") " pod="openshift-ingress/router-default-5444994796-6g6vs" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.791765 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mrfp\" (UniqueName: \"kubernetes.io/projected/71784ae3-e88b-48b3-94e4-f312f673084d-kube-api-access-4mrfp\") pod \"authentication-operator-69f744f599-5wsq8\" (UID: 
\"71784ae3-e88b-48b3-94e4-f312f673084d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5wsq8" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.791306 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.791878 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d1cc5379-f931-4e88-b422-4ecc9e33e2a0-oauth-serving-cert\") pod \"console-f9d7485db-x6zks\" (UID: \"d1cc5379-f931-4e88-b422-4ecc9e33e2a0\") " pod="openshift-console/console-f9d7485db-x6zks" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.792008 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e7bad9ca-8600-4221-b33c-89e055ff177d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-qsl5r\" (UID: \"e7bad9ca-8600-4221-b33c-89e055ff177d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.792042 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/510de763-cfdd-4464-a9d6-d061ad7b88bf-metrics-tls\") pod \"ingress-operator-5b745b69d9-9x6qf\" (UID: \"510de763-cfdd-4464-a9d6-d061ad7b88bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9x6qf" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.792065 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/638fa066-691d-4c65-9548-dc4b2fd35640-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-jpqvb\" (UID: \"638fa066-691d-4c65-9548-dc4b2fd35640\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jpqvb" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.792085 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.792110 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9ebaa53-a616-4d3f-a69f-19da113978c3-config\") pod \"controller-manager-879f6c89f-jmrsp\" (UID: \"e9ebaa53-a616-4d3f-a69f-19da113978c3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jmrsp" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.792126 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc 
kubenswrapper[4684]: I0121 10:08:18.792134 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50a26307-43c9-4325-b992-91cac18db66c-service-ca-bundle\") pod \"router-default-5444994796-6g6vs\" (UID: \"50a26307-43c9-4325-b992-91cac18db66c\") " pod="openshift-ingress/router-default-5444994796-6g6vs" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.792202 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.792233 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42c27350-86e3-4a02-9194-5dd24c297a12-config\") pod \"machine-api-operator-5694c8668f-q9wnn\" (UID: \"42c27350-86e3-4a02-9194-5dd24c297a12\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q9wnn" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.792406 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.792785 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e7bad9ca-8600-4221-b33c-89e055ff177d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-qsl5r\" (UID: \"e7bad9ca-8600-4221-b33c-89e055ff177d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.792961 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.793741 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c5367e25-e2b2-4280-8e56-ad67c088c382-srv-cert\") pod \"olm-operator-6b444d44fb-v6fbz\" (UID: \"c5367e25-e2b2-4280-8e56-ad67c088c382\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fbz" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.793769 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71784ae3-e88b-48b3-94e4-f312f673084d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5wsq8\" (UID: \"71784ae3-e88b-48b3-94e4-f312f673084d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5wsq8" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.793826 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bt4n\" (UniqueName: \"kubernetes.io/projected/634c4722-8d7d-463a-9844-8213116a4ce7-kube-api-access-8bt4n\") pod \"console-operator-58897d9998-jgwzk\" (UID: \"634c4722-8d7d-463a-9844-8213116a4ce7\") " 
pod="openshift-console-operator/console-operator-58897d9998-jgwzk" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.793848 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6f6dc594-89a5-4c0f-9132-296c2e028c7b-signing-key\") pod \"service-ca-9c57cc56f-c5hr7\" (UID: \"6f6dc594-89a5-4c0f-9132-296c2e028c7b\") " pod="openshift-service-ca/service-ca-9c57cc56f-c5hr7" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.793880 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d1cc5379-f931-4e88-b422-4ecc9e33e2a0-console-serving-cert\") pod \"console-f9d7485db-x6zks\" (UID: \"d1cc5379-f931-4e88-b422-4ecc9e33e2a0\") " pod="openshift-console/console-f9d7485db-x6zks" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.793937 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2729c786-b7ea-4398-8023-bc7be080c44f-config\") pod \"etcd-operator-b45778765-tv48v\" (UID: \"2729c786-b7ea-4398-8023-bc7be080c44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tv48v" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.793961 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwwr9\" (UniqueName: \"kubernetes.io/projected/728bf40a-9dfc-4850-a3cf-2b506c0fe68c-kube-api-access-kwwr9\") pod \"cluster-samples-operator-665b6dd947-98cfq\" (UID: \"728bf40a-9dfc-4850-a3cf-2b506c0fe68c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-98cfq" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.794012 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/983911ed-39aa-4717-bc5b-a03c1a5ab47d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-t4gw6\" (UID: \"983911ed-39aa-4717-bc5b-a03c1a5ab47d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t4gw6" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.794696 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2729c786-b7ea-4398-8023-bc7be080c44f-config\") pod \"etcd-operator-b45778765-tv48v\" (UID: \"2729c786-b7ea-4398-8023-bc7be080c44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tv48v" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.794791 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42c27350-86e3-4a02-9194-5dd24c297a12-config\") pod \"machine-api-operator-5694c8668f-q9wnn\" (UID: \"42c27350-86e3-4a02-9194-5dd24c297a12\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q9wnn" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.795045 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e7bad9ca-8600-4221-b33c-89e055ff177d-encryption-config\") pod \"apiserver-7bbb656c7d-qsl5r\" (UID: \"e7bad9ca-8600-4221-b33c-89e055ff177d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.795066 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/71784ae3-e88b-48b3-94e4-f312f673084d-service-ca-bundle\") pod \"authentication-operator-69f744f599-5wsq8\" (UID: \"71784ae3-e88b-48b3-94e4-f312f673084d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5wsq8" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.795154 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/983911ed-39aa-4717-bc5b-a03c1a5ab47d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-t4gw6\" (UID: \"983911ed-39aa-4717-bc5b-a03c1a5ab47d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t4gw6" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.795424 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/634c4722-8d7d-463a-9844-8213116a4ce7-trusted-ca\") pod \"console-operator-58897d9998-jgwzk\" (UID: \"634c4722-8d7d-463a-9844-8213116a4ce7\") " pod="openshift-console-operator/console-operator-58897d9998-jgwzk" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.795591 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9379cfa5-977e-46d6-b841-f1b8c99a6c75-serving-cert\") pod \"apiserver-76f77b778f-b5z5t\" (UID: \"9379cfa5-977e-46d6-b841-f1b8c99a6c75\") " pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.795813 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9379cfa5-977e-46d6-b841-f1b8c99a6c75-etcd-client\") pod \"apiserver-76f77b778f-b5z5t\" (UID: \"9379cfa5-977e-46d6-b841-f1b8c99a6c75\") " pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.796100 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9ebaa53-a616-4d3f-a69f-19da113978c3-config\") pod \"controller-manager-879f6c89f-jmrsp\" (UID: \"e9ebaa53-a616-4d3f-a69f-19da113978c3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jmrsp" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.796287 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71784ae3-e88b-48b3-94e4-f312f673084d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5wsq8\" (UID: \"71784ae3-e88b-48b3-94e4-f312f673084d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5wsq8" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.796339 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.796409 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9379cfa5-977e-46d6-b841-f1b8c99a6c75-config\") pod \"apiserver-76f77b778f-b5z5t\" (UID: \"9379cfa5-977e-46d6-b841-f1b8c99a6c75\") " pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:18 crc 
kubenswrapper[4684]: I0121 10:08:18.796831 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e7bad9ca-8600-4221-b33c-89e055ff177d-audit-policies\") pod \"apiserver-7bbb656c7d-qsl5r\" (UID: \"e7bad9ca-8600-4221-b33c-89e055ff177d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.796934 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d1cc5379-f931-4e88-b422-4ecc9e33e2a0-console-serving-cert\") pod \"console-f9d7485db-x6zks\" (UID: \"d1cc5379-f931-4e88-b422-4ecc9e33e2a0\") " pod="openshift-console/console-f9d7485db-x6zks" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.797487 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.797600 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/634c4722-8d7d-463a-9844-8213116a4ce7-serving-cert\") pod \"console-operator-58897d9998-jgwzk\" (UID: \"634c4722-8d7d-463a-9844-8213116a4ce7\") " pod="openshift-console-operator/console-operator-58897d9998-jgwzk" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.789689 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9379cfa5-977e-46d6-b841-f1b8c99a6c75-node-pullsecrets\") pod \"apiserver-76f77b778f-b5z5t\" (UID: \"9379cfa5-977e-46d6-b841-f1b8c99a6c75\") " pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.799060 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.799066 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.799184 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/638fa066-691d-4c65-9548-dc4b2fd35640-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-jpqvb\" (UID: \"638fa066-691d-4c65-9548-dc4b2fd35640\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jpqvb" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.799446 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.799589 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7bad9ca-8600-4221-b33c-89e055ff177d-serving-cert\") pod \"apiserver-7bbb656c7d-qsl5r\" (UID: \"e7bad9ca-8600-4221-b33c-89e055ff177d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.799887 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3f6c72-88f5-4e25-a2dc-fbc18e402afc-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lvmjt\" (UID: \"6b3f6c72-88f5-4e25-a2dc-fbc18e402afc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lvmjt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.799903 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2729c786-b7ea-4398-8023-bc7be080c44f-serving-cert\") pod \"etcd-operator-b45778765-tv48v\" (UID: \"2729c786-b7ea-4398-8023-bc7be080c44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tv48v" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.801177 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71784ae3-e88b-48b3-94e4-f312f673084d-serving-cert\") pod \"authentication-operator-69f744f599-5wsq8\" (UID: \"71784ae3-e88b-48b3-94e4-f312f673084d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5wsq8" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.801551 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47636a6f-dad5-46f0-b080-6d4c37652860-serving-cert\") pod \"openshift-config-operator-7777fb866f-rjpqz\" (UID: \"47636a6f-dad5-46f0-b080-6d4c37652860\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rjpqz" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.801699 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e7bad9ca-8600-4221-b33c-89e055ff177d-etcd-client\") pod \"apiserver-7bbb656c7d-qsl5r\" (UID: \"e7bad9ca-8600-4221-b33c-89e055ff177d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.801813 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d1cc5379-f931-4e88-b422-4ecc9e33e2a0-console-oauth-config\") pod \"console-f9d7485db-x6zks\" (UID: \"d1cc5379-f931-4e88-b422-4ecc9e33e2a0\") " pod="openshift-console/console-f9d7485db-x6zks" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.802333 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: 
I0121 10:08:18.802866 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.802992 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2729c786-b7ea-4398-8023-bc7be080c44f-etcd-service-ca\") pod \"etcd-operator-b45778765-tv48v\" (UID: \"2729c786-b7ea-4398-8023-bc7be080c44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tv48v" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.806227 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.827153 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.866238 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.870413 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a16e132-7543-4642-bd55-b3abf288e009-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-7khjp\" (UID: \"2a16e132-7543-4642-bd55-b3abf288e009\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7khjp" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.886360 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.894820 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/510de763-cfdd-4464-a9d6-d061ad7b88bf-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9x6qf\" (UID: \"510de763-cfdd-4464-a9d6-d061ad7b88bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9x6qf" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.894886 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/839bb9d6-4528-484e-9e12-ea749b5e177c-apiservice-cert\") pod \"packageserver-d55dfcdfc-dd9dl\" (UID: \"839bb9d6-4528-484e-9e12-ea749b5e177c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dd9dl" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.894930 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/50a26307-43c9-4325-b992-91cac18db66c-metrics-certs\") pod \"router-default-5444994796-6g6vs\" (UID: \"50a26307-43c9-4325-b992-91cac18db66c\") " pod="openshift-ingress/router-default-5444994796-6g6vs" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.894954 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6f6dc594-89a5-4c0f-9132-296c2e028c7b-signing-cabundle\") pod \"service-ca-9c57cc56f-c5hr7\" (UID: 
\"6f6dc594-89a5-4c0f-9132-296c2e028c7b\") " pod="openshift-service-ca/service-ca-9c57cc56f-c5hr7" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.894983 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d6jr\" (UniqueName: \"kubernetes.io/projected/9ffd57a7-387f-48ec-9643-a67d392ce9c0-kube-api-access-9d6jr\") pod \"migrator-59844c95c7-65tjm\" (UID: \"9ffd57a7-387f-48ec-9643-a67d392ce9c0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-65tjm" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.895021 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6l5m\" (UniqueName: \"kubernetes.io/projected/7f11428f-574a-480a-b8af-b9a65d9de720-kube-api-access-s6l5m\") pod \"package-server-manager-789f6589d5-dcnrm\" (UID: \"7f11428f-574a-480a-b8af-b9a65d9de720\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dcnrm" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.895049 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wzmf\" (UniqueName: \"kubernetes.io/projected/839bb9d6-4528-484e-9e12-ea749b5e177c-kube-api-access-2wzmf\") pod \"packageserver-d55dfcdfc-dd9dl\" (UID: \"839bb9d6-4528-484e-9e12-ea749b5e177c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dd9dl" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.895104 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq9sw\" (UniqueName: \"kubernetes.io/projected/e6097625-6479-4b86-b888-78d72d1c7089-kube-api-access-wq9sw\") pod \"dns-operator-744455d44c-x97lq\" (UID: \"e6097625-6479-4b86-b888-78d72d1c7089\") " pod="openshift-dns-operator/dns-operator-744455d44c-x97lq" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.895159 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp9sw\" (UniqueName: \"kubernetes.io/projected/c5367e25-e2b2-4280-8e56-ad67c088c382-kube-api-access-sp9sw\") pod \"olm-operator-6b444d44fb-v6fbz\" (UID: \"c5367e25-e2b2-4280-8e56-ad67c088c382\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fbz" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.895183 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e6097625-6479-4b86-b888-78d72d1c7089-metrics-tls\") pod \"dns-operator-744455d44c-x97lq\" (UID: \"e6097625-6479-4b86-b888-78d72d1c7089\") " pod="openshift-dns-operator/dns-operator-744455d44c-x97lq" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.895285 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/50a26307-43c9-4325-b992-91cac18db66c-stats-auth\") pod \"router-default-5444994796-6g6vs\" (UID: \"50a26307-43c9-4325-b992-91cac18db66c\") " pod="openshift-ingress/router-default-5444994796-6g6vs" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.895314 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/839bb9d6-4528-484e-9e12-ea749b5e177c-tmpfs\") pod \"packageserver-d55dfcdfc-dd9dl\" (UID: \"839bb9d6-4528-484e-9e12-ea749b5e177c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dd9dl" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.895337 4684 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/839bb9d6-4528-484e-9e12-ea749b5e177c-webhook-cert\") pod \"packageserver-d55dfcdfc-dd9dl\" (UID: \"839bb9d6-4528-484e-9e12-ea749b5e177c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dd9dl" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.895391 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0eb2234-baac-4251-8bf4-631c7407c09f-serving-cert\") pod \"service-ca-operator-777779d784-qjbj4\" (UID: \"d0eb2234-baac-4251-8bf4-631c7407c09f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qjbj4" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.895434 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gctt\" (UniqueName: \"kubernetes.io/projected/50a26307-43c9-4325-b992-91cac18db66c-kube-api-access-8gctt\") pod \"router-default-5444994796-6g6vs\" (UID: \"50a26307-43c9-4325-b992-91cac18db66c\") " pod="openshift-ingress/router-default-5444994796-6g6vs" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.895458 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f11428f-574a-480a-b8af-b9a65d9de720-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dcnrm\" (UID: \"7f11428f-574a-480a-b8af-b9a65d9de720\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dcnrm" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.895502 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-479w4\" (UniqueName: \"kubernetes.io/projected/d0eb2234-baac-4251-8bf4-631c7407c09f-kube-api-access-479w4\") pod \"service-ca-operator-777779d784-qjbj4\" (UID: \"d0eb2234-baac-4251-8bf4-631c7407c09f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qjbj4" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.895526 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/510de763-cfdd-4464-a9d6-d061ad7b88bf-trusted-ca\") pod \"ingress-operator-5b745b69d9-9x6qf\" (UID: \"510de763-cfdd-4464-a9d6-d061ad7b88bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9x6qf" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.895560 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0eb2234-baac-4251-8bf4-631c7407c09f-config\") pod \"service-ca-operator-777779d784-qjbj4\" (UID: \"d0eb2234-baac-4251-8bf4-631c7407c09f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qjbj4" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.895601 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/50a26307-43c9-4325-b992-91cac18db66c-default-certificate\") pod \"router-default-5444994796-6g6vs\" (UID: \"50a26307-43c9-4325-b992-91cac18db66c\") " pod="openshift-ingress/router-default-5444994796-6g6vs" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.895639 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/510de763-cfdd-4464-a9d6-d061ad7b88bf-metrics-tls\") pod 
\"ingress-operator-5b745b69d9-9x6qf\" (UID: \"510de763-cfdd-4464-a9d6-d061ad7b88bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9x6qf" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.895665 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50a26307-43c9-4325-b992-91cac18db66c-service-ca-bundle\") pod \"router-default-5444994796-6g6vs\" (UID: \"50a26307-43c9-4325-b992-91cac18db66c\") " pod="openshift-ingress/router-default-5444994796-6g6vs" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.895699 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c5367e25-e2b2-4280-8e56-ad67c088c382-srv-cert\") pod \"olm-operator-6b444d44fb-v6fbz\" (UID: \"c5367e25-e2b2-4280-8e56-ad67c088c382\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fbz" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.895736 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6f6dc594-89a5-4c0f-9132-296c2e028c7b-signing-key\") pod \"service-ca-9c57cc56f-c5hr7\" (UID: \"6f6dc594-89a5-4c0f-9132-296c2e028c7b\") " pod="openshift-service-ca/service-ca-9c57cc56f-c5hr7" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.895781 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c5367e25-e2b2-4280-8e56-ad67c088c382-profile-collector-cert\") pod \"olm-operator-6b444d44fb-v6fbz\" (UID: \"c5367e25-e2b2-4280-8e56-ad67c088c382\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fbz" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.895782 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/839bb9d6-4528-484e-9e12-ea749b5e177c-tmpfs\") pod \"packageserver-d55dfcdfc-dd9dl\" (UID: \"839bb9d6-4528-484e-9e12-ea749b5e177c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dd9dl" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.895811 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf9nf\" (UniqueName: \"kubernetes.io/projected/6f6dc594-89a5-4c0f-9132-296c2e028c7b-kube-api-access-gf9nf\") pod \"service-ca-9c57cc56f-c5hr7\" (UID: \"6f6dc594-89a5-4c0f-9132-296c2e028c7b\") " pod="openshift-service-ca/service-ca-9c57cc56f-c5hr7" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.895842 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxwn2\" (UniqueName: \"kubernetes.io/projected/510de763-cfdd-4464-a9d6-d061ad7b88bf-kube-api-access-vxwn2\") pod \"ingress-operator-5b745b69d9-9x6qf\" (UID: \"510de763-cfdd-4464-a9d6-d061ad7b88bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9x6qf" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.905699 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.913132 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a16e132-7543-4642-bd55-b3abf288e009-config\") pod \"kube-apiserver-operator-766d6c64bb-7khjp\" (UID: \"2a16e132-7543-4642-bd55-b3abf288e009\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7khjp" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.925742 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.946783 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.966035 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 21 10:08:18 crc kubenswrapper[4684]: I0121 10:08:18.987938 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.006427 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.026108 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.046230 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.067831 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.086172 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.106031 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.126450 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.147545 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.160861 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e6097625-6479-4b86-b888-78d72d1c7089-metrics-tls\") pod \"dns-operator-744455d44c-x97lq\" (UID: \"e6097625-6479-4b86-b888-78d72d1c7089\") " pod="openshift-dns-operator/dns-operator-744455d44c-x97lq" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.166031 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.186647 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.205609 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.226091 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 21 10:08:19 crc 
kubenswrapper[4684]: I0121 10:08:19.246266 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.265952 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.285515 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.306606 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.326529 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.346465 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.366287 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.381127 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/839bb9d6-4528-484e-9e12-ea749b5e177c-apiservice-cert\") pod \"packageserver-d55dfcdfc-dd9dl\" (UID: \"839bb9d6-4528-484e-9e12-ea749b5e177c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dd9dl" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.382054 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/839bb9d6-4528-484e-9e12-ea749b5e177c-webhook-cert\") pod \"packageserver-d55dfcdfc-dd9dl\" (UID: \"839bb9d6-4528-484e-9e12-ea749b5e177c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dd9dl" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.387641 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.406637 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.426582 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.446970 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.475817 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.486765 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.505879 4684 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.525775 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.545879 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.565996 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.577349 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0eb2234-baac-4251-8bf4-631c7407c09f-config\") pod \"service-ca-operator-777779d784-qjbj4\" (UID: \"d0eb2234-baac-4251-8bf4-631c7407c09f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qjbj4" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.585298 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.606883 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.621502 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0eb2234-baac-4251-8bf4-631c7407c09f-serving-cert\") pod \"service-ca-operator-777779d784-qjbj4\" (UID: \"d0eb2234-baac-4251-8bf4-631c7407c09f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qjbj4" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.626939 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.646212 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.666633 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.684961 4684 request.go:700] Waited for 1.014345789s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.686573 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.707646 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.727454 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.745910 4684 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.760410 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/50a26307-43c9-4325-b992-91cac18db66c-stats-auth\") pod \"router-default-5444994796-6g6vs\" (UID: \"50a26307-43c9-4325-b992-91cac18db66c\") " pod="openshift-ingress/router-default-5444994796-6g6vs" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.766884 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.776875 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50a26307-43c9-4325-b992-91cac18db66c-service-ca-bundle\") pod \"router-default-5444994796-6g6vs\" (UID: \"50a26307-43c9-4325-b992-91cac18db66c\") " pod="openshift-ingress/router-default-5444994796-6g6vs" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.787488 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.799846 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/50a26307-43c9-4325-b992-91cac18db66c-metrics-certs\") pod \"router-default-5444994796-6g6vs\" (UID: \"50a26307-43c9-4325-b992-91cac18db66c\") " pod="openshift-ingress/router-default-5444994796-6g6vs" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.806694 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.827185 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.838924 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/50a26307-43c9-4325-b992-91cac18db66c-default-certificate\") pod \"router-default-5444994796-6g6vs\" (UID: \"50a26307-43c9-4325-b992-91cac18db66c\") " pod="openshift-ingress/router-default-5444994796-6g6vs" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.847129 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.866419 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.885498 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 21 10:08:19 crc kubenswrapper[4684]: E0121 10:08:19.895858 4684 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 21 10:08:19 crc kubenswrapper[4684]: E0121 10:08:19.895904 4684 secret.go:188] Couldn't get secret openshift-ingress-operator/metrics-tls: failed to sync secret cache: timed out waiting for the condition Jan 21 10:08:19 crc kubenswrapper[4684]: E0121 10:08:19.895854 4684 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: failed to sync configmap cache: timed out waiting for the 
condition Jan 21 10:08:19 crc kubenswrapper[4684]: E0121 10:08:19.895986 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f11428f-574a-480a-b8af-b9a65d9de720-package-server-manager-serving-cert podName:7f11428f-574a-480a-b8af-b9a65d9de720 nodeName:}" failed. No retries permitted until 2026-01-21 10:08:20.395955497 +0000 UTC m=+138.154038474 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/7f11428f-574a-480a-b8af-b9a65d9de720-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-dcnrm" (UID: "7f11428f-574a-480a-b8af-b9a65d9de720") : failed to sync secret cache: timed out waiting for the condition Jan 21 10:08:19 crc kubenswrapper[4684]: E0121 10:08:19.896002 4684 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Jan 21 10:08:19 crc kubenswrapper[4684]: E0121 10:08:19.895976 4684 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Jan 21 10:08:19 crc kubenswrapper[4684]: E0121 10:08:19.896043 4684 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 21 10:08:19 crc kubenswrapper[4684]: E0121 10:08:19.896012 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/510de763-cfdd-4464-a9d6-d061ad7b88bf-metrics-tls podName:510de763-cfdd-4464-a9d6-d061ad7b88bf nodeName:}" failed. No retries permitted until 2026-01-21 10:08:20.396000408 +0000 UTC m=+138.154083385 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/510de763-cfdd-4464-a9d6-d061ad7b88bf-metrics-tls") pod "ingress-operator-5b745b69d9-9x6qf" (UID: "510de763-cfdd-4464-a9d6-d061ad7b88bf") : failed to sync secret cache: timed out waiting for the condition Jan 21 10:08:19 crc kubenswrapper[4684]: E0121 10:08:19.895869 4684 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Jan 21 10:08:19 crc kubenswrapper[4684]: E0121 10:08:19.896149 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/510de763-cfdd-4464-a9d6-d061ad7b88bf-trusted-ca podName:510de763-cfdd-4464-a9d6-d061ad7b88bf nodeName:}" failed. No retries permitted until 2026-01-21 10:08:20.396116023 +0000 UTC m=+138.154199190 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/510de763-cfdd-4464-a9d6-d061ad7b88bf-trusted-ca") pod "ingress-operator-5b745b69d9-9x6qf" (UID: "510de763-cfdd-4464-a9d6-d061ad7b88bf") : failed to sync configmap cache: timed out waiting for the condition Jan 21 10:08:19 crc kubenswrapper[4684]: E0121 10:08:19.896173 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f6dc594-89a5-4c0f-9132-296c2e028c7b-signing-key podName:6f6dc594-89a5-4c0f-9132-296c2e028c7b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:20.396163064 +0000 UTC m=+138.154246031 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/6f6dc594-89a5-4c0f-9132-296c2e028c7b-signing-key") pod "service-ca-9c57cc56f-c5hr7" (UID: "6f6dc594-89a5-4c0f-9132-296c2e028c7b") : failed to sync secret cache: timed out waiting for the condition Jan 21 10:08:19 crc kubenswrapper[4684]: E0121 10:08:19.896196 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5367e25-e2b2-4280-8e56-ad67c088c382-srv-cert podName:c5367e25-e2b2-4280-8e56-ad67c088c382 nodeName:}" failed. No retries permitted until 2026-01-21 10:08:20.396185765 +0000 UTC m=+138.154268942 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c5367e25-e2b2-4280-8e56-ad67c088c382-srv-cert") pod "olm-operator-6b444d44fb-v6fbz" (UID: "c5367e25-e2b2-4280-8e56-ad67c088c382") : failed to sync secret cache: timed out waiting for the condition Jan 21 10:08:19 crc kubenswrapper[4684]: E0121 10:08:19.896212 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5367e25-e2b2-4280-8e56-ad67c088c382-profile-collector-cert podName:c5367e25-e2b2-4280-8e56-ad67c088c382 nodeName:}" failed. No retries permitted until 2026-01-21 10:08:20.396204536 +0000 UTC m=+138.154287703 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/c5367e25-e2b2-4280-8e56-ad67c088c382-profile-collector-cert") pod "olm-operator-6b444d44fb-v6fbz" (UID: "c5367e25-e2b2-4280-8e56-ad67c088c382") : failed to sync secret cache: timed out waiting for the condition Jan 21 10:08:19 crc kubenswrapper[4684]: E0121 10:08:19.896230 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6f6dc594-89a5-4c0f-9132-296c2e028c7b-signing-cabundle podName:6f6dc594-89a5-4c0f-9132-296c2e028c7b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:20.396223307 +0000 UTC m=+138.154306494 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/6f6dc594-89a5-4c0f-9132-296c2e028c7b-signing-cabundle") pod "service-ca-9c57cc56f-c5hr7" (UID: "6f6dc594-89a5-4c0f-9132-296c2e028c7b") : failed to sync configmap cache: timed out waiting for the condition Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.922099 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk2tn\" (UniqueName: \"kubernetes.io/projected/89f98230-b1df-4e65-9163-a82223eb2cf5-kube-api-access-gk2tn\") pod \"openshift-apiserver-operator-796bbdcf4f-pxz9b\" (UID: \"89f98230-b1df-4e65-9163-a82223eb2cf5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxz9b" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.938145 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.947347 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.966152 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 10:08:19 crc kubenswrapper[4684]: I0121 10:08:19.987734 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.022576 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nddn2\" (UniqueName: \"kubernetes.io/projected/e11d4c83-c131-4dd3-b20f-f38faad79236-kube-api-access-nddn2\") pod \"machine-approver-56656f9798-hz6v7\" (UID: \"e11d4c83-c131-4dd3-b20f-f38faad79236\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hz6v7" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.043345 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxz9b" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.047913 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.051529 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km8vb\" (UniqueName: \"kubernetes.io/projected/5b70a09c-424c-4317-a719-e0dbb6eefe1b-kube-api-access-km8vb\") pod \"route-controller-manager-6576b87f9c-k9lpf\" (UID: \"5b70a09c-424c-4317-a719-e0dbb6eefe1b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k9lpf" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.066726 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.093937 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.106468 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.126661 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.147218 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.167947 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.188832 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.213112 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.228441 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.246112 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.267802 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.286995 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.294586 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k9lpf" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.307705 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hz6v7" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.327052 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.346187 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.369769 4684 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.379546 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hz6v7" event={"ID":"e11d4c83-c131-4dd3-b20f-f38faad79236","Type":"ContainerStarted","Data":"af868924f1253d085d88c574b9a6a574c792c286d927e4b5330a87aa5bef3b98"} Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.387693 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.406731 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.421036 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f11428f-574a-480a-b8af-b9a65d9de720-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dcnrm\" (UID: \"7f11428f-574a-480a-b8af-b9a65d9de720\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dcnrm" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.421121 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/510de763-cfdd-4464-a9d6-d061ad7b88bf-trusted-ca\") pod \"ingress-operator-5b745b69d9-9x6qf\" (UID: \"510de763-cfdd-4464-a9d6-d061ad7b88bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9x6qf" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.421189 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/510de763-cfdd-4464-a9d6-d061ad7b88bf-metrics-tls\") pod \"ingress-operator-5b745b69d9-9x6qf\" (UID: \"510de763-cfdd-4464-a9d6-d061ad7b88bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9x6qf" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.421234 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c5367e25-e2b2-4280-8e56-ad67c088c382-srv-cert\") pod \"olm-operator-6b444d44fb-v6fbz\" (UID: \"c5367e25-e2b2-4280-8e56-ad67c088c382\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fbz" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.421273 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6f6dc594-89a5-4c0f-9132-296c2e028c7b-signing-key\") pod \"service-ca-9c57cc56f-c5hr7\" (UID: \"6f6dc594-89a5-4c0f-9132-296c2e028c7b\") " pod="openshift-service-ca/service-ca-9c57cc56f-c5hr7" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.421321 4684 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c5367e25-e2b2-4280-8e56-ad67c088c382-profile-collector-cert\") pod \"olm-operator-6b444d44fb-v6fbz\" (UID: \"c5367e25-e2b2-4280-8e56-ad67c088c382\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fbz" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.421438 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6f6dc594-89a5-4c0f-9132-296c2e028c7b-signing-cabundle\") pod \"service-ca-9c57cc56f-c5hr7\" (UID: \"6f6dc594-89a5-4c0f-9132-296c2e028c7b\") " pod="openshift-service-ca/service-ca-9c57cc56f-c5hr7" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.422815 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/6f6dc594-89a5-4c0f-9132-296c2e028c7b-signing-cabundle\") pod \"service-ca-9c57cc56f-c5hr7\" (UID: \"6f6dc594-89a5-4c0f-9132-296c2e028c7b\") " pod="openshift-service-ca/service-ca-9c57cc56f-c5hr7" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.423942 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/510de763-cfdd-4464-a9d6-d061ad7b88bf-trusted-ca\") pod \"ingress-operator-5b745b69d9-9x6qf\" (UID: \"510de763-cfdd-4464-a9d6-d061ad7b88bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9x6qf" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.428150 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.428887 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c5367e25-e2b2-4280-8e56-ad67c088c382-profile-collector-cert\") pod \"olm-operator-6b444d44fb-v6fbz\" (UID: \"c5367e25-e2b2-4280-8e56-ad67c088c382\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fbz" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.429066 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f11428f-574a-480a-b8af-b9a65d9de720-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dcnrm\" (UID: \"7f11428f-574a-480a-b8af-b9a65d9de720\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dcnrm" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.430246 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c5367e25-e2b2-4280-8e56-ad67c088c382-srv-cert\") pod \"olm-operator-6b444d44fb-v6fbz\" (UID: \"c5367e25-e2b2-4280-8e56-ad67c088c382\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fbz" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.430822 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/6f6dc594-89a5-4c0f-9132-296c2e028c7b-signing-key\") pod \"service-ca-9c57cc56f-c5hr7\" (UID: \"6f6dc594-89a5-4c0f-9132-296c2e028c7b\") " pod="openshift-service-ca/service-ca-9c57cc56f-c5hr7" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.437756 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/510de763-cfdd-4464-a9d6-d061ad7b88bf-metrics-tls\") pod \"ingress-operator-5b745b69d9-9x6qf\" (UID: \"510de763-cfdd-4464-a9d6-d061ad7b88bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9x6qf" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.446270 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.463088 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxz9b"] Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.466053 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-k9lpf"] Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.469101 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 10:08:20 crc kubenswrapper[4684]: W0121 10:08:20.476467 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b70a09c_424c_4317_a719_e0dbb6eefe1b.slice/crio-020d0c6e69d052e4dadc85918b23254c0b38766c54db2dcffd6a961cb6d565a8 WatchSource:0}: Error finding container 020d0c6e69d052e4dadc85918b23254c0b38766c54db2dcffd6a961cb6d565a8: Status 404 returned error can't find the container with id 020d0c6e69d052e4dadc85918b23254c0b38766c54db2dcffd6a961cb6d565a8 Jan 21 10:08:20 crc kubenswrapper[4684]: W0121 10:08:20.477145 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89f98230_b1df_4e65_9163_a82223eb2cf5.slice/crio-86954c37071ba1efb34d704d2cc24f75dd828b13b72eee57c7e9f80adb30bd5c WatchSource:0}: Error finding container 86954c37071ba1efb34d704d2cc24f75dd828b13b72eee57c7e9f80adb30bd5c: Status 404 returned error can't find the container with id 86954c37071ba1efb34d704d2cc24f75dd828b13b72eee57c7e9f80adb30bd5c Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.486160 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.506543 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.527289 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.547097 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.565475 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.606505 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpbtv\" (UniqueName: \"kubernetes.io/projected/0a7f0592-7c89-496c-acb0-3ae031dbffb1-kube-api-access-zpbtv\") pod \"downloads-7954f5f757-5rnkx\" (UID: \"0a7f0592-7c89-496c-acb0-3ae031dbffb1\") " pod="openshift-console/downloads-7954f5f757-5rnkx" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.620885 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/638fa066-691d-4c65-9548-dc4b2fd35640-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-jpqvb\" (UID: \"638fa066-691d-4c65-9548-dc4b2fd35640\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jpqvb" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.628017 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-5rnkx" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.645333 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqspz\" (UniqueName: \"kubernetes.io/projected/47636a6f-dad5-46f0-b080-6d4c37652860-kube-api-access-fqspz\") pod \"openshift-config-operator-7777fb866f-rjpqz\" (UID: \"47636a6f-dad5-46f0-b080-6d4c37652860\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rjpqz" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.666081 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwrjc\" (UniqueName: \"kubernetes.io/projected/2729c786-b7ea-4398-8023-bc7be080c44f-kube-api-access-qwrjc\") pod \"etcd-operator-b45778765-tv48v\" (UID: \"2729c786-b7ea-4398-8023-bc7be080c44f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tv48v" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.681706 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dd7x\" (UniqueName: \"kubernetes.io/projected/983911ed-39aa-4717-bc5b-a03c1a5ab47d-kube-api-access-9dd7x\") pod \"openshift-controller-manager-operator-756b6f6bc6-t4gw6\" (UID: \"983911ed-39aa-4717-bc5b-a03c1a5ab47d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t4gw6" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.693055 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rjpqz" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.704560 4684 request.go:700] Waited for 1.916809777s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/machine-api-operator/token Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.710917 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n54m6\" (UniqueName: \"kubernetes.io/projected/d1cc5379-f931-4e88-b422-4ecc9e33e2a0-kube-api-access-n54m6\") pod \"console-f9d7485db-x6zks\" (UID: \"d1cc5379-f931-4e88-b422-4ecc9e33e2a0\") " pod="openshift-console/console-f9d7485db-x6zks" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.725718 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6z7q\" (UniqueName: \"kubernetes.io/projected/42c27350-86e3-4a02-9194-5dd24c297a12-kube-api-access-h6z7q\") pod \"machine-api-operator-5694c8668f-q9wnn\" (UID: \"42c27350-86e3-4a02-9194-5dd24c297a12\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q9wnn" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.744992 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zrw2\" (UniqueName: \"kubernetes.io/projected/9379cfa5-977e-46d6-b841-f1b8c99a6c75-kube-api-access-9zrw2\") pod \"apiserver-76f77b778f-b5z5t\" (UID: \"9379cfa5-977e-46d6-b841-f1b8c99a6c75\") " pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.766536 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-q9wnn" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.771504 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.791277 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t4gw6" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.792199 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knh99\" (UniqueName: \"kubernetes.io/projected/e9ebaa53-a616-4d3f-a69f-19da113978c3-kube-api-access-knh99\") pod \"controller-manager-879f6c89f-jmrsp\" (UID: \"e9ebaa53-a616-4d3f-a69f-19da113978c3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jmrsp" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.798401 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnmg4\" (UniqueName: \"kubernetes.io/projected/638fa066-691d-4c65-9548-dc4b2fd35640-kube-api-access-fnmg4\") pod \"cluster-image-registry-operator-dc59b4c8b-jpqvb\" (UID: \"638fa066-691d-4c65-9548-dc4b2fd35640\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jpqvb" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.810302 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a16e132-7543-4642-bd55-b3abf288e009-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-7khjp\" (UID: \"2a16e132-7543-4642-bd55-b3abf288e009\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7khjp" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.824345 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8jll\" (UniqueName: \"kubernetes.io/projected/e7bad9ca-8600-4221-b33c-89e055ff177d-kube-api-access-k8jll\") pod \"apiserver-7bbb656c7d-qsl5r\" (UID: \"e7bad9ca-8600-4221-b33c-89e055ff177d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.832555 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-x6zks" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.837933 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-tv48v" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.842555 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mrfp\" (UniqueName: \"kubernetes.io/projected/71784ae3-e88b-48b3-94e4-f312f673084d-kube-api-access-4mrfp\") pod \"authentication-operator-69f744f599-5wsq8\" (UID: \"71784ae3-e88b-48b3-94e4-f312f673084d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5wsq8" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.842599 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-5rnkx"] Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.854696 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7khjp" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.860223 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2g9f\" (UniqueName: \"kubernetes.io/projected/af694d93-1240-4a87-a2fe-153bb2401143-kube-api-access-q2g9f\") pod \"oauth-openshift-558db77b4-plndh\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") " pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.883084 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b3f6c72-88f5-4e25-a2dc-fbc18e402afc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lvmjt\" (UID: \"6b3f6c72-88f5-4e25-a2dc-fbc18e402afc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lvmjt" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.901625 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwwr9\" (UniqueName: \"kubernetes.io/projected/728bf40a-9dfc-4850-a3cf-2b506c0fe68c-kube-api-access-kwwr9\") pod \"cluster-samples-operator-665b6dd947-98cfq\" (UID: \"728bf40a-9dfc-4850-a3cf-2b506c0fe68c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-98cfq" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.910209 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-rjpqz"] Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.923665 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bt4n\" (UniqueName: \"kubernetes.io/projected/634c4722-8d7d-463a-9844-8213116a4ce7-kube-api-access-8bt4n\") pod \"console-operator-58897d9998-jgwzk\" (UID: \"634c4722-8d7d-463a-9844-8213116a4ce7\") " pod="openshift-console-operator/console-operator-58897d9998-jgwzk" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.960404 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/510de763-cfdd-4464-a9d6-d061ad7b88bf-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9x6qf\" (UID: \"510de763-cfdd-4464-a9d6-d061ad7b88bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9x6qf" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.963602 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jmrsp" Jan 21 10:08:20 crc kubenswrapper[4684]: I0121 10:08:20.992585 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d6jr\" (UniqueName: \"kubernetes.io/projected/9ffd57a7-387f-48ec-9643-a67d392ce9c0-kube-api-access-9d6jr\") pod \"migrator-59844c95c7-65tjm\" (UID: \"9ffd57a7-387f-48ec-9643-a67d392ce9c0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-65tjm" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.002702 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.007140 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6l5m\" (UniqueName: \"kubernetes.io/projected/7f11428f-574a-480a-b8af-b9a65d9de720-kube-api-access-s6l5m\") pod \"package-server-manager-789f6589d5-dcnrm\" (UID: \"7f11428f-574a-480a-b8af-b9a65d9de720\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dcnrm" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.013681 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jpqvb" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.025158 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wzmf\" (UniqueName: \"kubernetes.io/projected/839bb9d6-4528-484e-9e12-ea749b5e177c-kube-api-access-2wzmf\") pod \"packageserver-d55dfcdfc-dd9dl\" (UID: \"839bb9d6-4528-484e-9e12-ea749b5e177c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dd9dl" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.036191 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-5wsq8" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.036470 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dcnrm" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.049656 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq9sw\" (UniqueName: \"kubernetes.io/projected/e6097625-6479-4b86-b888-78d72d1c7089-kube-api-access-wq9sw\") pod \"dns-operator-744455d44c-x97lq\" (UID: \"e6097625-6479-4b86-b888-78d72d1c7089\") " pod="openshift-dns-operator/dns-operator-744455d44c-x97lq" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.055195 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-98cfq" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.061721 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp9sw\" (UniqueName: \"kubernetes.io/projected/c5367e25-e2b2-4280-8e56-ad67c088c382-kube-api-access-sp9sw\") pod \"olm-operator-6b444d44fb-v6fbz\" (UID: \"c5367e25-e2b2-4280-8e56-ad67c088c382\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fbz" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.086212 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-jgwzk" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.088926 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gctt\" (UniqueName: \"kubernetes.io/projected/50a26307-43c9-4325-b992-91cac18db66c-kube-api-access-8gctt\") pod \"router-default-5444994796-6g6vs\" (UID: \"50a26307-43c9-4325-b992-91cac18db66c\") " pod="openshift-ingress/router-default-5444994796-6g6vs" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.111921 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-479w4\" (UniqueName: \"kubernetes.io/projected/d0eb2234-baac-4251-8bf4-631c7407c09f-kube-api-access-479w4\") pod \"service-ca-operator-777779d784-qjbj4\" (UID: \"d0eb2234-baac-4251-8bf4-631c7407c09f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qjbj4" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.112644 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.124009 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf9nf\" (UniqueName: \"kubernetes.io/projected/6f6dc594-89a5-4c0f-9132-296c2e028c7b-kube-api-access-gf9nf\") pod \"service-ca-9c57cc56f-c5hr7\" (UID: \"6f6dc594-89a5-4c0f-9132-296c2e028c7b\") " pod="openshift-service-ca/service-ca-9c57cc56f-c5hr7" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.151216 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lvmjt" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.159421 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxwn2\" (UniqueName: \"kubernetes.io/projected/510de763-cfdd-4464-a9d6-d061ad7b88bf-kube-api-access-vxwn2\") pod \"ingress-operator-5b745b69d9-9x6qf\" (UID: \"510de763-cfdd-4464-a9d6-d061ad7b88bf\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9x6qf" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.208988 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-65tjm" Jan 21 10:08:21 crc kubenswrapper[4684]: W0121 10:08:21.230453 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a7f0592_7c89_496c_acb0_3ae031dbffb1.slice/crio-d4aa5466f7148d0c58a812dd7d58be89d4d0258a7bc1c9f21d0addc8f6b57464 WatchSource:0}: Error finding container d4aa5466f7148d0c58a812dd7d58be89d4d0258a7bc1c9f21d0addc8f6b57464: Status 404 returned error can't find the container with id d4aa5466f7148d0c58a812dd7d58be89d4d0258a7bc1c9f21d0addc8f6b57464 Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.231869 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-x97lq" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.234757 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/603ee3d6-73a1-4796-a857-84f9c889b3af-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6wfjd\" (UID: \"603ee3d6-73a1-4796-a857-84f9c889b3af\") " pod="openshift-marketplace/marketplace-operator-79b997595-6wfjd" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.234828 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4ngz\" (UniqueName: \"kubernetes.io/projected/34c69792-c774-40f9-a0de-d6f1a6f82714-kube-api-access-x4ngz\") pod \"collect-profiles-29483160-9pprb\" (UID: \"34c69792-c774-40f9-a0de-d6f1a6f82714\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-9pprb" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.234876 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-registry-tls\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.234895 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/603ee3d6-73a1-4796-a857-84f9c889b3af-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6wfjd\" (UID: \"603ee3d6-73a1-4796-a857-84f9c889b3af\") " pod="openshift-marketplace/marketplace-operator-79b997595-6wfjd" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.234926 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g69fb\" (UniqueName: \"kubernetes.io/projected/4f0d0b06-b45f-4c0b-9ce3-bcc3b0da6c75-kube-api-access-g69fb\") pod \"catalog-operator-68c6474976-hvhlb\" (UID: \"4f0d0b06-b45f-4c0b-9ce3-bcc3b0da6c75\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hvhlb" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.234945 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-installation-pull-secrets\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.234979 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/804798e6-979c-46ae-bc4b-e6bd55ce6e9b-proxy-tls\") pod \"machine-config-operator-74547568cd-jfblx\" (UID: \"804798e6-979c-46ae-bc4b-e6bd55ce6e9b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jfblx" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.234999 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-registry-certificates\") pod 
\"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.235026 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-bound-sa-token\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.235047 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/366b5359-ea1c-4565-ae7b-53a71eadfa3e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vn2nz\" (UID: \"366b5359-ea1c-4565-ae7b-53a71eadfa3e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vn2nz" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.235068 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c455bb4-0425-4c3e-8ad4-e49ffe9dd69a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-72t62\" (UID: \"6c455bb4-0425-4c3e-8ad4-e49ffe9dd69a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72t62" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.235086 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/804798e6-979c-46ae-bc4b-e6bd55ce6e9b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-jfblx\" (UID: \"804798e6-979c-46ae-bc4b-e6bd55ce6e9b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jfblx" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.235117 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwgcj\" (UniqueName: \"kubernetes.io/projected/6c455bb4-0425-4c3e-8ad4-e49ffe9dd69a-kube-api-access-vwgcj\") pod \"kube-storage-version-migrator-operator-b67b599dd-72t62\" (UID: \"6c455bb4-0425-4c3e-8ad4-e49ffe9dd69a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72t62" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.235150 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tw56\" (UniqueName: \"kubernetes.io/projected/ecc3b392-1d71-4544-93ac-c93169931c41-kube-api-access-6tw56\") pod \"control-plane-machine-set-operator-78cbb6b69f-64j7q\" (UID: \"ecc3b392-1d71-4544-93ac-c93169931c41\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-64j7q" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.235179 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0d9a24c5-ed0d-473e-a562-ee476c663ed5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-dg7k7\" (UID: \"0d9a24c5-ed0d-473e-a562-ee476c663ed5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dg7k7" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.235212 4684 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.235234 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/366b5359-ea1c-4565-ae7b-53a71eadfa3e-config\") pod \"kube-controller-manager-operator-78b949d7b-vn2nz\" (UID: \"366b5359-ea1c-4565-ae7b-53a71eadfa3e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vn2nz" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.235258 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqgf6\" (UniqueName: \"kubernetes.io/projected/0d9a24c5-ed0d-473e-a562-ee476c663ed5-kube-api-access-wqgf6\") pod \"multus-admission-controller-857f4d67dd-dg7k7\" (UID: \"0d9a24c5-ed0d-473e-a562-ee476c663ed5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dg7k7" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.235294 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-ca-trust-extracted\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.235313 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c455bb4-0425-4c3e-8ad4-e49ffe9dd69a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-72t62\" (UID: \"6c455bb4-0425-4c3e-8ad4-e49ffe9dd69a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72t62" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.235344 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w45z\" (UniqueName: \"kubernetes.io/projected/cd90b7f0-d328-4cfb-ad0d-83cf58bee09d-kube-api-access-5w45z\") pod \"machine-config-controller-84d6567774-dq2dv\" (UID: \"cd90b7f0-d328-4cfb-ad0d-83cf58bee09d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dq2dv" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.235412 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-trusted-ca\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.235441 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cd90b7f0-d328-4cfb-ad0d-83cf58bee09d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-dq2dv\" (UID: \"cd90b7f0-d328-4cfb-ad0d-83cf58bee09d\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dq2dv" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.235479 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/366b5359-ea1c-4565-ae7b-53a71eadfa3e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vn2nz\" (UID: \"366b5359-ea1c-4565-ae7b-53a71eadfa3e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vn2nz" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.235526 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8jzj\" (UniqueName: \"kubernetes.io/projected/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-kube-api-access-q8jzj\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.235547 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ecc3b392-1d71-4544-93ac-c93169931c41-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-64j7q\" (UID: \"ecc3b392-1d71-4544-93ac-c93169931c41\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-64j7q" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.235565 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4f0d0b06-b45f-4c0b-9ce3-bcc3b0da6c75-srv-cert\") pod \"catalog-operator-68c6474976-hvhlb\" (UID: \"4f0d0b06-b45f-4c0b-9ce3-bcc3b0da6c75\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hvhlb" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.235673 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc2zj\" (UniqueName: \"kubernetes.io/projected/603ee3d6-73a1-4796-a857-84f9c889b3af-kube-api-access-bc2zj\") pod \"marketplace-operator-79b997595-6wfjd\" (UID: \"603ee3d6-73a1-4796-a857-84f9c889b3af\") " pod="openshift-marketplace/marketplace-operator-79b997595-6wfjd" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.235699 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cd90b7f0-d328-4cfb-ad0d-83cf58bee09d-proxy-tls\") pod \"machine-config-controller-84d6567774-dq2dv\" (UID: \"cd90b7f0-d328-4cfb-ad0d-83cf58bee09d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dq2dv" Jan 21 10:08:21 crc kubenswrapper[4684]: E0121 10:08:21.239403 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:21.739316111 +0000 UTC m=+139.497399078 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.244545 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6gcr\" (UniqueName: \"kubernetes.io/projected/804798e6-979c-46ae-bc4b-e6bd55ce6e9b-kube-api-access-w6gcr\") pod \"machine-config-operator-74547568cd-jfblx\" (UID: \"804798e6-979c-46ae-bc4b-e6bd55ce6e9b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jfblx" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.246148 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34c69792-c774-40f9-a0de-d6f1a6f82714-secret-volume\") pod \"collect-profiles-29483160-9pprb\" (UID: \"34c69792-c774-40f9-a0de-d6f1a6f82714\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-9pprb" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.246270 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/804798e6-979c-46ae-bc4b-e6bd55ce6e9b-images\") pod \"machine-config-operator-74547568cd-jfblx\" (UID: \"804798e6-979c-46ae-bc4b-e6bd55ce6e9b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jfblx" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.246298 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34c69792-c774-40f9-a0de-d6f1a6f82714-config-volume\") pod \"collect-profiles-29483160-9pprb\" (UID: \"34c69792-c774-40f9-a0de-d6f1a6f82714\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-9pprb" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.246324 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4f0d0b06-b45f-4c0b-9ce3-bcc3b0da6c75-profile-collector-cert\") pod \"catalog-operator-68c6474976-hvhlb\" (UID: \"4f0d0b06-b45f-4c0b-9ce3-bcc3b0da6c75\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hvhlb" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.247812 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dd9dl" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.270765 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qjbj4" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.295043 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-6g6vs" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.304625 4684 util.go:30] "No sandbox for pod can be found. 
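A note on the MountDevice failure above: kubelet resolves the driver name in the volume's CSI source against its in-memory registry of node plugins, and kubevirt.io.hostpath-provisioner has not registered yet (the csi-hostpathplugin-s5mkm pod that provides it is itself still being set up further down in this log). The registered set is mirrored into the node's CSINode object, so it can be inspected from outside the node. A minimal sketch, assuming client-go and a reachable cluster; the kubeconfig path is a placeholder and the node name "crc" is taken from these log lines:

```go
// Sketch: list the CSI drivers registered on node "crc" via its CSINode
// object; a driver absent here corresponds to kubelet's "not found in the
// list of registered CSI drivers" error above.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	found := false
	for _, d := range csiNode.Spec.Drivers {
		fmt.Println("registered driver:", d.Name)
		if d.Name == "kubevirt.io.hostpath-provisioner" {
			found = true
		}
	}
	fmt.Println("hostpath provisioner registered:", found)
}
```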
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.244545 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6gcr\" (UniqueName: \"kubernetes.io/projected/804798e6-979c-46ae-bc4b-e6bd55ce6e9b-kube-api-access-w6gcr\") pod \"machine-config-operator-74547568cd-jfblx\" (UID: \"804798e6-979c-46ae-bc4b-e6bd55ce6e9b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jfblx"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.246148 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34c69792-c774-40f9-a0de-d6f1a6f82714-secret-volume\") pod \"collect-profiles-29483160-9pprb\" (UID: \"34c69792-c774-40f9-a0de-d6f1a6f82714\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-9pprb"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.246270 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/804798e6-979c-46ae-bc4b-e6bd55ce6e9b-images\") pod \"machine-config-operator-74547568cd-jfblx\" (UID: \"804798e6-979c-46ae-bc4b-e6bd55ce6e9b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jfblx"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.246298 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34c69792-c774-40f9-a0de-d6f1a6f82714-config-volume\") pod \"collect-profiles-29483160-9pprb\" (UID: \"34c69792-c774-40f9-a0de-d6f1a6f82714\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-9pprb"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.246324 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4f0d0b06-b45f-4c0b-9ce3-bcc3b0da6c75-profile-collector-cert\") pod \"catalog-operator-68c6474976-hvhlb\" (UID: \"4f0d0b06-b45f-4c0b-9ce3-bcc3b0da6c75\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hvhlb"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.247812 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dd9dl"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.270765 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qjbj4"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.295043 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-6g6vs"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.304625 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9x6qf"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.313258 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fbz"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.349388 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.349701 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6gcr\" (UniqueName: \"kubernetes.io/projected/804798e6-979c-46ae-bc4b-e6bd55ce6e9b-kube-api-access-w6gcr\") pod \"machine-config-operator-74547568cd-jfblx\" (UID: \"804798e6-979c-46ae-bc4b-e6bd55ce6e9b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jfblx"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.349807 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34c69792-c774-40f9-a0de-d6f1a6f82714-secret-volume\") pod \"collect-profiles-29483160-9pprb\" (UID: \"34c69792-c774-40f9-a0de-d6f1a6f82714\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-9pprb"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.349856 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59vkh\" (UniqueName: \"kubernetes.io/projected/565d1a27-65a5-420c-bd5c-6041e0bc7c2c-kube-api-access-59vkh\") pod \"ingress-canary-vnnj7\" (UID: \"565d1a27-65a5-420c-bd5c-6041e0bc7c2c\") " pod="openshift-ingress-canary/ingress-canary-vnnj7"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.349944 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/804798e6-979c-46ae-bc4b-e6bd55ce6e9b-images\") pod \"machine-config-operator-74547568cd-jfblx\" (UID: \"804798e6-979c-46ae-bc4b-e6bd55ce6e9b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jfblx"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.349996 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34c69792-c774-40f9-a0de-d6f1a6f82714-config-volume\") pod \"collect-profiles-29483160-9pprb\" (UID: \"34c69792-c774-40f9-a0de-d6f1a6f82714\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-9pprb"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.350024 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4f0d0b06-b45f-4c0b-9ce3-bcc3b0da6c75-profile-collector-cert\") pod \"catalog-operator-68c6474976-hvhlb\" (UID: \"4f0d0b06-b45f-4c0b-9ce3-bcc3b0da6c75\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hvhlb"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.350067 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/603ee3d6-73a1-4796-a857-84f9c889b3af-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6wfjd\" (UID: \"603ee3d6-73a1-4796-a857-84f9c889b3af\") " pod="openshift-marketplace/marketplace-operator-79b997595-6wfjd"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.350107 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4ngz\" (UniqueName: \"kubernetes.io/projected/34c69792-c774-40f9-a0de-d6f1a6f82714-kube-api-access-x4ngz\") pod \"collect-profiles-29483160-9pprb\" (UID: \"34c69792-c774-40f9-a0de-d6f1a6f82714\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-9pprb"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.350130 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/843f3aca-bc65-445d-bed9-afbaa8a052d7-mountpoint-dir\") pod \"csi-hostpathplugin-s5mkm\" (UID: \"843f3aca-bc65-445d-bed9-afbaa8a052d7\") " pod="hostpath-provisioner/csi-hostpathplugin-s5mkm"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.350187 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-registry-tls\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.350208 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/603ee3d6-73a1-4796-a857-84f9c889b3af-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6wfjd\" (UID: \"603ee3d6-73a1-4796-a857-84f9c889b3af\") " pod="openshift-marketplace/marketplace-operator-79b997595-6wfjd"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.350225 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/90af4d5f-871e-4bc3-87e7-8f954cbfc5ef-certs\") pod \"machine-config-server-cs67x\" (UID: \"90af4d5f-871e-4bc3-87e7-8f954cbfc5ef\") " pod="openshift-machine-config-operator/machine-config-server-cs67x"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.350270 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g69fb\" (UniqueName: \"kubernetes.io/projected/4f0d0b06-b45f-4c0b-9ce3-bcc3b0da6c75-kube-api-access-g69fb\") pod \"catalog-operator-68c6474976-hvhlb\" (UID: \"4f0d0b06-b45f-4c0b-9ce3-bcc3b0da6c75\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hvhlb"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.350289 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9xmr\" (UniqueName: \"kubernetes.io/projected/843f3aca-bc65-445d-bed9-afbaa8a052d7-kube-api-access-x9xmr\") pod \"csi-hostpathplugin-s5mkm\" (UID: \"843f3aca-bc65-445d-bed9-afbaa8a052d7\") " pod="hostpath-provisioner/csi-hostpathplugin-s5mkm"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.350311 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krqxr\" (UniqueName: \"kubernetes.io/projected/90af4d5f-871e-4bc3-87e7-8f954cbfc5ef-kube-api-access-krqxr\") pod \"machine-config-server-cs67x\" (UID: \"90af4d5f-871e-4bc3-87e7-8f954cbfc5ef\") " pod="openshift-machine-config-operator/machine-config-server-cs67x"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.350340 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-installation-pull-secrets\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.350362 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgvp8\" (UniqueName: \"kubernetes.io/projected/7957fca4-e890-434e-bf3f-d77a83b1c99d-kube-api-access-bgvp8\") pod \"dns-default-mnmmn\" (UID: \"7957fca4-e890-434e-bf3f-d77a83b1c99d\") " pod="openshift-dns/dns-default-mnmmn"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.350397 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/804798e6-979c-46ae-bc4b-e6bd55ce6e9b-proxy-tls\") pod \"machine-config-operator-74547568cd-jfblx\" (UID: \"804798e6-979c-46ae-bc4b-e6bd55ce6e9b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jfblx"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.350418 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-registry-certificates\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.350446 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-bound-sa-token\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.350468 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/366b5359-ea1c-4565-ae7b-53a71eadfa3e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vn2nz\" (UID: \"366b5359-ea1c-4565-ae7b-53a71eadfa3e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vn2nz"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.350508 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c455bb4-0425-4c3e-8ad4-e49ffe9dd69a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-72t62\" (UID: \"6c455bb4-0425-4c3e-8ad4-e49ffe9dd69a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72t62"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.350576 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/804798e6-979c-46ae-bc4b-e6bd55ce6e9b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-jfblx\" (UID: \"804798e6-979c-46ae-bc4b-e6bd55ce6e9b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jfblx"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.350662 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwgcj\" (UniqueName: \"kubernetes.io/projected/6c455bb4-0425-4c3e-8ad4-e49ffe9dd69a-kube-api-access-vwgcj\") pod \"kube-storage-version-migrator-operator-b67b599dd-72t62\" (UID: \"6c455bb4-0425-4c3e-8ad4-e49ffe9dd69a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72t62"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.350688 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tw56\" (UniqueName: \"kubernetes.io/projected/ecc3b392-1d71-4544-93ac-c93169931c41-kube-api-access-6tw56\") pod \"control-plane-machine-set-operator-78cbb6b69f-64j7q\" (UID: \"ecc3b392-1d71-4544-93ac-c93169931c41\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-64j7q"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.350742 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/565d1a27-65a5-420c-bd5c-6041e0bc7c2c-cert\") pod \"ingress-canary-vnnj7\" (UID: \"565d1a27-65a5-420c-bd5c-6041e0bc7c2c\") " pod="openshift-ingress-canary/ingress-canary-vnnj7"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.350763 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0d9a24c5-ed0d-473e-a562-ee476c663ed5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-dg7k7\" (UID: \"0d9a24c5-ed0d-473e-a562-ee476c663ed5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dg7k7"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.350794 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/90af4d5f-871e-4bc3-87e7-8f954cbfc5ef-node-bootstrap-token\") pod \"machine-config-server-cs67x\" (UID: \"90af4d5f-871e-4bc3-87e7-8f954cbfc5ef\") " pod="openshift-machine-config-operator/machine-config-server-cs67x"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.350822 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/843f3aca-bc65-445d-bed9-afbaa8a052d7-csi-data-dir\") pod \"csi-hostpathplugin-s5mkm\" (UID: \"843f3aca-bc65-445d-bed9-afbaa8a052d7\") " pod="hostpath-provisioner/csi-hostpathplugin-s5mkm"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.350899 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/366b5359-ea1c-4565-ae7b-53a71eadfa3e-config\") pod \"kube-controller-manager-operator-78b949d7b-vn2nz\" (UID: \"366b5359-ea1c-4565-ae7b-53a71eadfa3e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vn2nz"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.350949 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqgf6\" (UniqueName: \"kubernetes.io/projected/0d9a24c5-ed0d-473e-a562-ee476c663ed5-kube-api-access-wqgf6\") pod \"multus-admission-controller-857f4d67dd-dg7k7\" (UID: \"0d9a24c5-ed0d-473e-a562-ee476c663ed5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dg7k7"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.350969 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/843f3aca-bc65-445d-bed9-afbaa8a052d7-socket-dir\") pod \"csi-hostpathplugin-s5mkm\" (UID: \"843f3aca-bc65-445d-bed9-afbaa8a052d7\") " pod="hostpath-provisioner/csi-hostpathplugin-s5mkm"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.351007 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-ca-trust-extracted\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.351024 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c455bb4-0425-4c3e-8ad4-e49ffe9dd69a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-72t62\" (UID: \"6c455bb4-0425-4c3e-8ad4-e49ffe9dd69a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72t62"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.351047 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w45z\" (UniqueName: \"kubernetes.io/projected/cd90b7f0-d328-4cfb-ad0d-83cf58bee09d-kube-api-access-5w45z\") pod \"machine-config-controller-84d6567774-dq2dv\" (UID: \"cd90b7f0-d328-4cfb-ad0d-83cf58bee09d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dq2dv"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.351077 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7957fca4-e890-434e-bf3f-d77a83b1c99d-config-volume\") pod \"dns-default-mnmmn\" (UID: \"7957fca4-e890-434e-bf3f-d77a83b1c99d\") " pod="openshift-dns/dns-default-mnmmn"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.351108 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-trusted-ca\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.351139 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cd90b7f0-d328-4cfb-ad0d-83cf58bee09d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-dq2dv\" (UID: \"cd90b7f0-d328-4cfb-ad0d-83cf58bee09d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dq2dv"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.351157 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/843f3aca-bc65-445d-bed9-afbaa8a052d7-plugins-dir\") pod \"csi-hostpathplugin-s5mkm\" (UID: \"843f3aca-bc65-445d-bed9-afbaa8a052d7\") " pod="hostpath-provisioner/csi-hostpathplugin-s5mkm"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.351196 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/366b5359-ea1c-4565-ae7b-53a71eadfa3e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vn2nz\" (UID: \"366b5359-ea1c-4565-ae7b-53a71eadfa3e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vn2nz"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.351227 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7957fca4-e890-434e-bf3f-d77a83b1c99d-metrics-tls\") pod \"dns-default-mnmmn\" (UID: \"7957fca4-e890-434e-bf3f-d77a83b1c99d\") " pod="openshift-dns/dns-default-mnmmn"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.351273 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8jzj\" (UniqueName: \"kubernetes.io/projected/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-kube-api-access-q8jzj\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.351294 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/843f3aca-bc65-445d-bed9-afbaa8a052d7-registration-dir\") pod \"csi-hostpathplugin-s5mkm\" (UID: \"843f3aca-bc65-445d-bed9-afbaa8a052d7\") " pod="hostpath-provisioner/csi-hostpathplugin-s5mkm"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.351313 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4f0d0b06-b45f-4c0b-9ce3-bcc3b0da6c75-srv-cert\") pod \"catalog-operator-68c6474976-hvhlb\" (UID: \"4f0d0b06-b45f-4c0b-9ce3-bcc3b0da6c75\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hvhlb"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.351335 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ecc3b392-1d71-4544-93ac-c93169931c41-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-64j7q\" (UID: \"ecc3b392-1d71-4544-93ac-c93169931c41\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-64j7q"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.353776 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc2zj\" (UniqueName: \"kubernetes.io/projected/603ee3d6-73a1-4796-a857-84f9c889b3af-kube-api-access-bc2zj\") pod \"marketplace-operator-79b997595-6wfjd\" (UID: \"603ee3d6-73a1-4796-a857-84f9c889b3af\") " pod="openshift-marketplace/marketplace-operator-79b997595-6wfjd"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.353837 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cd90b7f0-d328-4cfb-ad0d-83cf58bee09d-proxy-tls\") pod \"machine-config-controller-84d6567774-dq2dv\" (UID: \"cd90b7f0-d328-4cfb-ad0d-83cf58bee09d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dq2dv"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.355141 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/804798e6-979c-46ae-bc4b-e6bd55ce6e9b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-jfblx\" (UID: \"804798e6-979c-46ae-bc4b-e6bd55ce6e9b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jfblx"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.356132 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-registry-certificates\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.357468 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c455bb4-0425-4c3e-8ad4-e49ffe9dd69a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-72t62\" (UID: \"6c455bb4-0425-4c3e-8ad4-e49ffe9dd69a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72t62"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.357620 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-ca-trust-extracted\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.360615 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0d9a24c5-ed0d-473e-a562-ee476c663ed5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-dg7k7\" (UID: \"0d9a24c5-ed0d-473e-a562-ee476c663ed5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dg7k7"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.362313 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/603ee3d6-73a1-4796-a857-84f9c889b3af-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6wfjd\" (UID: \"603ee3d6-73a1-4796-a857-84f9c889b3af\") " pod="openshift-marketplace/marketplace-operator-79b997595-6wfjd"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.363020 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-registry-tls\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.364245 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-trusted-ca\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.364245 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34c69792-c774-40f9-a0de-d6f1a6f82714-config-volume\") pod \"collect-profiles-29483160-9pprb\" (UID: \"34c69792-c774-40f9-a0de-d6f1a6f82714\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-9pprb"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.364271 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cd90b7f0-d328-4cfb-ad0d-83cf58bee09d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-dq2dv\" (UID: \"cd90b7f0-d328-4cfb-ad0d-83cf58bee09d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dq2dv"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.366060 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c455bb4-0425-4c3e-8ad4-e49ffe9dd69a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-72t62\" (UID: \"6c455bb4-0425-4c3e-8ad4-e49ffe9dd69a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72t62"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.367172 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-c5hr7"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.367459 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/804798e6-979c-46ae-bc4b-e6bd55ce6e9b-images\") pod \"machine-config-operator-74547568cd-jfblx\" (UID: \"804798e6-979c-46ae-bc4b-e6bd55ce6e9b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jfblx"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.368586 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/804798e6-979c-46ae-bc4b-e6bd55ce6e9b-proxy-tls\") pod \"machine-config-operator-74547568cd-jfblx\" (UID: \"804798e6-979c-46ae-bc4b-e6bd55ce6e9b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jfblx"
Jan 21 10:08:21 crc kubenswrapper[4684]: E0121 10:08:21.368147 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:21.868116225 +0000 UTC m=+139.626199202 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
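A note on the retry cadence: the alternating MountDevice/TearDown failures for this PVC are rescheduled by kubelet's nested pending-operations tracker with a growing delay, and the "durationBeforeRetry 500ms" printed above is the initial backoff step. A minimal sketch of that exponential-backoff pattern; the constants are illustrative, not kubelet's exact implementation:

```go
// Sketch: exponential backoff of the kind behind "No retries permitted
// until ... (durationBeforeRetry 500ms)". The delay doubles per consecutive
// failure of the same operation, up to a cap.
package main

import (
	"fmt"
	"time"
)

const (
	initialDelay = 500 * time.Millisecond // first retry delay, as in the log
	maxDelay     = 2*time.Minute + 2*time.Second
)

// delayAfter returns the wait before the next retry, given how many
// consecutive failures have occurred so far.
func delayAfter(failures int) time.Duration {
	d := initialDelay
	for i := 1; i < failures; i++ {
		d *= 2
		if d > maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for n := 1; n <= 8; n++ {
		fmt.Printf("failure %d -> retry in %v\n", n, delayAfter(n))
	}
}
```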
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.369678 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/366b5359-ea1c-4565-ae7b-53a71eadfa3e-config\") pod \"kube-controller-manager-operator-78b949d7b-vn2nz\" (UID: \"366b5359-ea1c-4565-ae7b-53a71eadfa3e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vn2nz"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.369852 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34c69792-c774-40f9-a0de-d6f1a6f82714-secret-volume\") pod \"collect-profiles-29483160-9pprb\" (UID: \"34c69792-c774-40f9-a0de-d6f1a6f82714\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-9pprb"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.370657 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ecc3b392-1d71-4544-93ac-c93169931c41-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-64j7q\" (UID: \"ecc3b392-1d71-4544-93ac-c93169931c41\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-64j7q"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.370909 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4f0d0b06-b45f-4c0b-9ce3-bcc3b0da6c75-profile-collector-cert\") pod \"catalog-operator-68c6474976-hvhlb\" (UID: \"4f0d0b06-b45f-4c0b-9ce3-bcc3b0da6c75\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hvhlb"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.370909 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4f0d0b06-b45f-4c0b-9ce3-bcc3b0da6c75-srv-cert\") pod \"catalog-operator-68c6474976-hvhlb\" (UID: \"4f0d0b06-b45f-4c0b-9ce3-bcc3b0da6c75\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hvhlb"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.371251 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/603ee3d6-73a1-4796-a857-84f9c889b3af-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6wfjd\" (UID: \"603ee3d6-73a1-4796-a857-84f9c889b3af\") " pod="openshift-marketplace/marketplace-operator-79b997595-6wfjd"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.371345 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/366b5359-ea1c-4565-ae7b-53a71eadfa3e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vn2nz\" (UID: \"366b5359-ea1c-4565-ae7b-53a71eadfa3e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vn2nz"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.371592 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-installation-pull-secrets\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.372082 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cd90b7f0-d328-4cfb-ad0d-83cf58bee09d-proxy-tls\") pod \"machine-config-controller-84d6567774-dq2dv\" (UID: \"cd90b7f0-d328-4cfb-ad0d-83cf58bee09d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dq2dv"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.390079 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k9lpf" event={"ID":"5b70a09c-424c-4317-a719-e0dbb6eefe1b","Type":"ContainerStarted","Data":"2af020b2920c0c7ed8144ab3a7c1d4cecc14b3be9d787aaaf71e28e9b8d50796"}
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.390172 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k9lpf" event={"ID":"5b70a09c-424c-4317-a719-e0dbb6eefe1b","Type":"ContainerStarted","Data":"020d0c6e69d052e4dadc85918b23254c0b38766c54db2dcffd6a961cb6d565a8"}
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.391958 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k9lpf"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.394109 4684 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-k9lpf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.394174 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k9lpf" podUID="5b70a09c-424c-4317-a719-e0dbb6eefe1b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.394638 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxz9b" event={"ID":"89f98230-b1df-4e65-9163-a82223eb2cf5","Type":"ContainerStarted","Data":"f4418eb1d78377de59eb89cb25b90277efaec84602fc8ebc4ec7de25230ca801"}
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.394710 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxz9b" event={"ID":"89f98230-b1df-4e65-9163-a82223eb2cf5","Type":"ContainerStarted","Data":"86954c37071ba1efb34d704d2cc24f75dd828b13b72eee57c7e9f80adb30bd5c"}
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.396647 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hz6v7" event={"ID":"e11d4c83-c131-4dd3-b20f-f38faad79236","Type":"ContainerStarted","Data":"4b5cbf4ec8e685a02a19e57e4cd816e0ebcc85f0d7ace04ecd9e0e2e9e6c9f63"}
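A note on the readiness failure above: "connect: connection refused" moments after ContainerStarted is the usual race between the container starting and the server binding its port; kubelet simply keeps probing on the configured period until the endpoint answers. A minimal sketch of such an HTTPS readiness check, assuming a self-signed serving cert (hence the skipped verification) and reusing the endpoint from these log lines:

```go
// Sketch: one-shot HTTPS readiness check in the spirit of the prober lines
// above. A dial error maps to the "connection refused" probe failure.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func ready(url string) bool {
	client := &http.Client{
		Timeout: time.Second,
		// These operator endpoints typically serve self-signed certs.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false // e.g. dial tcp ...: connect: connection refused
	}
	defer resp.Body.Close()
	return resp.StatusCode >= 200 && resp.StatusCode < 400
}

func main() {
	fmt.Println(ready("https://10.217.0.5:8443/healthz"))
}
```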
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.396763 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4ngz\" (UniqueName: \"kubernetes.io/projected/34c69792-c774-40f9-a0de-d6f1a6f82714-kube-api-access-x4ngz\") pod \"collect-profiles-29483160-9pprb\" (UID: \"34c69792-c774-40f9-a0de-d6f1a6f82714\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-9pprb"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.411621 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rjpqz" event={"ID":"47636a6f-dad5-46f0-b080-6d4c37652860","Type":"ContainerStarted","Data":"8bd814e4aa16b487a981b4c366f46c74afa95a3012f71c3e20e442dc1b323403"}
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.415644 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5rnkx" event={"ID":"0a7f0592-7c89-496c-acb0-3ae031dbffb1","Type":"ContainerStarted","Data":"d4aa5466f7148d0c58a812dd7d58be89d4d0258a7bc1c9f21d0addc8f6b57464"}
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.420745 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g69fb\" (UniqueName: \"kubernetes.io/projected/4f0d0b06-b45f-4c0b-9ce3-bcc3b0da6c75-kube-api-access-g69fb\") pod \"catalog-operator-68c6474976-hvhlb\" (UID: \"4f0d0b06-b45f-4c0b-9ce3-bcc3b0da6c75\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hvhlb"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.439086 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-bound-sa-token\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.453321 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwgcj\" (UniqueName: \"kubernetes.io/projected/6c455bb4-0425-4c3e-8ad4-e49ffe9dd69a-kube-api-access-vwgcj\") pod \"kube-storage-version-migrator-operator-b67b599dd-72t62\" (UID: \"6c455bb4-0425-4c3e-8ad4-e49ffe9dd69a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72t62"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.455809 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/843f3aca-bc65-445d-bed9-afbaa8a052d7-socket-dir\") pod \"csi-hostpathplugin-s5mkm\" (UID: \"843f3aca-bc65-445d-bed9-afbaa8a052d7\") " pod="hostpath-provisioner/csi-hostpathplugin-s5mkm"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.456214 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7957fca4-e890-434e-bf3f-d77a83b1c99d-config-volume\") pod \"dns-default-mnmmn\" (UID: \"7957fca4-e890-434e-bf3f-d77a83b1c99d\") " pod="openshift-dns/dns-default-mnmmn"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.456601 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/843f3aca-bc65-445d-bed9-afbaa8a052d7-plugins-dir\") pod \"csi-hostpathplugin-s5mkm\" (UID: \"843f3aca-bc65-445d-bed9-afbaa8a052d7\") " pod="hostpath-provisioner/csi-hostpathplugin-s5mkm"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.456754 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7957fca4-e890-434e-bf3f-d77a83b1c99d-metrics-tls\") pod \"dns-default-mnmmn\" (UID: \"7957fca4-e890-434e-bf3f-d77a83b1c99d\") " pod="openshift-dns/dns-default-mnmmn"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.456854 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/843f3aca-bc65-445d-bed9-afbaa8a052d7-registration-dir\") pod \"csi-hostpathplugin-s5mkm\" (UID: \"843f3aca-bc65-445d-bed9-afbaa8a052d7\") " pod="hostpath-provisioner/csi-hostpathplugin-s5mkm"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.456995 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59vkh\" (UniqueName: \"kubernetes.io/projected/565d1a27-65a5-420c-bd5c-6041e0bc7c2c-kube-api-access-59vkh\") pod \"ingress-canary-vnnj7\" (UID: \"565d1a27-65a5-420c-bd5c-6041e0bc7c2c\") " pod="openshift-ingress-canary/ingress-canary-vnnj7"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.457256 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/843f3aca-bc65-445d-bed9-afbaa8a052d7-mountpoint-dir\") pod \"csi-hostpathplugin-s5mkm\" (UID: \"843f3aca-bc65-445d-bed9-afbaa8a052d7\") " pod="hostpath-provisioner/csi-hostpathplugin-s5mkm"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.457380 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/90af4d5f-871e-4bc3-87e7-8f954cbfc5ef-certs\") pod \"machine-config-server-cs67x\" (UID: \"90af4d5f-871e-4bc3-87e7-8f954cbfc5ef\") " pod="openshift-machine-config-operator/machine-config-server-cs67x"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.457475 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9xmr\" (UniqueName: \"kubernetes.io/projected/843f3aca-bc65-445d-bed9-afbaa8a052d7-kube-api-access-x9xmr\") pod \"csi-hostpathplugin-s5mkm\" (UID: \"843f3aca-bc65-445d-bed9-afbaa8a052d7\") " pod="hostpath-provisioner/csi-hostpathplugin-s5mkm"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.457550 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krqxr\" (UniqueName: \"kubernetes.io/projected/90af4d5f-871e-4bc3-87e7-8f954cbfc5ef-kube-api-access-krqxr\") pod \"machine-config-server-cs67x\" (UID: \"90af4d5f-871e-4bc3-87e7-8f954cbfc5ef\") " pod="openshift-machine-config-operator/machine-config-server-cs67x"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.457621 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgvp8\" (UniqueName: \"kubernetes.io/projected/7957fca4-e890-434e-bf3f-d77a83b1c99d-kube-api-access-bgvp8\") pod \"dns-default-mnmmn\" (UID: \"7957fca4-e890-434e-bf3f-d77a83b1c99d\") " pod="openshift-dns/dns-default-mnmmn"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.457910 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/565d1a27-65a5-420c-bd5c-6041e0bc7c2c-cert\") pod \"ingress-canary-vnnj7\" (UID: \"565d1a27-65a5-420c-bd5c-6041e0bc7c2c\") " pod="openshift-ingress-canary/ingress-canary-vnnj7"
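A note on the csi-hostpathplugin-s5mkm host-path volumes above (registration-dir, socket-dir, plugins-dir, mountpoint-dir, csi-data-dir): they map onto kubelet's plugin directory layout, where the driver drops a registration socket into the directory kubelet watches; only once that socket appears does the driver name become resolvable for the failing PVC mounts in this log. A minimal sketch that enumerates registration sockets, assuming the default path /var/lib/kubelet/plugins_registry (deployments can differ):

```go
// Sketch: list plugin registration sockets in the directory kubelet watches.
// Until the hostpath driver's socket shows up here, the PVC mounts in this
// log keep failing with "driver name ... not found".
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	dir := "/var/lib/kubelet/plugins_registry" // default; may differ per deployment
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		if filepath.Ext(e.Name()) == ".sock" {
			fmt.Println("registration socket:", filepath.Join(dir, e.Name()))
		}
	}
}
```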
kubenswrapper[4684]: I0121 10:08:21.458023 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/90af4d5f-871e-4bc3-87e7-8f954cbfc5ef-node-bootstrap-token\") pod \"machine-config-server-cs67x\" (UID: \"90af4d5f-871e-4bc3-87e7-8f954cbfc5ef\") " pod="openshift-machine-config-operator/machine-config-server-cs67x" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.458277 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.458431 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/843f3aca-bc65-445d-bed9-afbaa8a052d7-csi-data-dir\") pod \"csi-hostpathplugin-s5mkm\" (UID: \"843f3aca-bc65-445d-bed9-afbaa8a052d7\") " pod="hostpath-provisioner/csi-hostpathplugin-s5mkm" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.459789 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/843f3aca-bc65-445d-bed9-afbaa8a052d7-socket-dir\") pod \"csi-hostpathplugin-s5mkm\" (UID: \"843f3aca-bc65-445d-bed9-afbaa8a052d7\") " pod="hostpath-provisioner/csi-hostpathplugin-s5mkm" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.461047 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7957fca4-e890-434e-bf3f-d77a83b1c99d-config-volume\") pod \"dns-default-mnmmn\" (UID: \"7957fca4-e890-434e-bf3f-d77a83b1c99d\") " pod="openshift-dns/dns-default-mnmmn" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.461191 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/843f3aca-bc65-445d-bed9-afbaa8a052d7-plugins-dir\") pod \"csi-hostpathplugin-s5mkm\" (UID: \"843f3aca-bc65-445d-bed9-afbaa8a052d7\") " pod="hostpath-provisioner/csi-hostpathplugin-s5mkm" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.465848 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/843f3aca-bc65-445d-bed9-afbaa8a052d7-registration-dir\") pod \"csi-hostpathplugin-s5mkm\" (UID: \"843f3aca-bc65-445d-bed9-afbaa8a052d7\") " pod="hostpath-provisioner/csi-hostpathplugin-s5mkm" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.466290 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/843f3aca-bc65-445d-bed9-afbaa8a052d7-mountpoint-dir\") pod \"csi-hostpathplugin-s5mkm\" (UID: \"843f3aca-bc65-445d-bed9-afbaa8a052d7\") " pod="hostpath-provisioner/csi-hostpathplugin-s5mkm" Jan 21 10:08:21 crc kubenswrapper[4684]: E0121 10:08:21.467055 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:21.967035274 +0000 UTC m=+139.725118241 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.467323 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/843f3aca-bc65-445d-bed9-afbaa8a052d7-csi-data-dir\") pod \"csi-hostpathplugin-s5mkm\" (UID: \"843f3aca-bc65-445d-bed9-afbaa8a052d7\") " pod="hostpath-provisioner/csi-hostpathplugin-s5mkm" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.468641 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7957fca4-e890-434e-bf3f-d77a83b1c99d-metrics-tls\") pod \"dns-default-mnmmn\" (UID: \"7957fca4-e890-434e-bf3f-d77a83b1c99d\") " pod="openshift-dns/dns-default-mnmmn" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.471893 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-q9wnn"] Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.474526 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/366b5359-ea1c-4565-ae7b-53a71eadfa3e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vn2nz\" (UID: \"366b5359-ea1c-4565-ae7b-53a71eadfa3e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vn2nz" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.474959 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/90af4d5f-871e-4bc3-87e7-8f954cbfc5ef-node-bootstrap-token\") pod \"machine-config-server-cs67x\" (UID: \"90af4d5f-871e-4bc3-87e7-8f954cbfc5ef\") " pod="openshift-machine-config-operator/machine-config-server-cs67x" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.476583 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/90af4d5f-871e-4bc3-87e7-8f954cbfc5ef-certs\") pod \"machine-config-server-cs67x\" (UID: \"90af4d5f-871e-4bc3-87e7-8f954cbfc5ef\") " pod="openshift-machine-config-operator/machine-config-server-cs67x" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.476985 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/565d1a27-65a5-420c-bd5c-6041e0bc7c2c-cert\") pod \"ingress-canary-vnnj7\" (UID: \"565d1a27-65a5-420c-bd5c-6041e0bc7c2c\") " pod="openshift-ingress-canary/ingress-canary-vnnj7" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.514305 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w45z\" (UniqueName: \"kubernetes.io/projected/cd90b7f0-d328-4cfb-ad0d-83cf58bee09d-kube-api-access-5w45z\") pod \"machine-config-controller-84d6567774-dq2dv\" (UID: \"cd90b7f0-d328-4cfb-ad0d-83cf58bee09d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dq2dv" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.527295 4684 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-w6gcr\" (UniqueName: \"kubernetes.io/projected/804798e6-979c-46ae-bc4b-e6bd55ce6e9b-kube-api-access-w6gcr\") pod \"machine-config-operator-74547568cd-jfblx\" (UID: \"804798e6-979c-46ae-bc4b-e6bd55ce6e9b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jfblx" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.540269 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vn2nz" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.545198 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8jzj\" (UniqueName: \"kubernetes.io/projected/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-kube-api-access-q8jzj\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.560084 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:21 crc kubenswrapper[4684]: E0121 10:08:21.560713 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:22.060688713 +0000 UTC m=+139.818771680 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.565648 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tw56\" (UniqueName: \"kubernetes.io/projected/ecc3b392-1d71-4544-93ac-c93169931c41-kube-api-access-6tw56\") pod \"control-plane-machine-set-operator-78cbb6b69f-64j7q\" (UID: \"ecc3b392-1d71-4544-93ac-c93169931c41\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-64j7q" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.566316 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-64j7q" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.586226 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72t62" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.589872 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc2zj\" (UniqueName: \"kubernetes.io/projected/603ee3d6-73a1-4796-a857-84f9c889b3af-kube-api-access-bc2zj\") pod \"marketplace-operator-79b997595-6wfjd\" (UID: \"603ee3d6-73a1-4796-a857-84f9c889b3af\") " pod="openshift-marketplace/marketplace-operator-79b997595-6wfjd" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.610774 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-65tjm"] Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.610837 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqgf6\" (UniqueName: \"kubernetes.io/projected/0d9a24c5-ed0d-473e-a562-ee476c663ed5-kube-api-access-wqgf6\") pod \"multus-admission-controller-857f4d67dd-dg7k7\" (UID: \"0d9a24c5-ed0d-473e-a562-ee476c663ed5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dg7k7" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.619757 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5wsq8"] Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.619919 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hvhlb" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.626764 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krqxr\" (UniqueName: \"kubernetes.io/projected/90af4d5f-871e-4bc3-87e7-8f954cbfc5ef-kube-api-access-krqxr\") pod \"machine-config-server-cs67x\" (UID: \"90af4d5f-871e-4bc3-87e7-8f954cbfc5ef\") " pod="openshift-machine-config-operator/machine-config-server-cs67x" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.628741 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-dg7k7" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.646407 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-9pprb" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.661315 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:21 crc kubenswrapper[4684]: E0121 10:08:21.662182 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:22.162158103 +0000 UTC m=+139.920241070 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.669041 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgvp8\" (UniqueName: \"kubernetes.io/projected/7957fca4-e890-434e-bf3f-d77a83b1c99d-kube-api-access-bgvp8\") pod \"dns-default-mnmmn\" (UID: \"7957fca4-e890-434e-bf3f-d77a83b1c99d\") " pod="openshift-dns/dns-default-mnmmn" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.680282 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59vkh\" (UniqueName: \"kubernetes.io/projected/565d1a27-65a5-420c-bd5c-6041e0bc7c2c-kube-api-access-59vkh\") pod \"ingress-canary-vnnj7\" (UID: \"565d1a27-65a5-420c-bd5c-6041e0bc7c2c\") " pod="openshift-ingress-canary/ingress-canary-vnnj7" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.694818 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vnnj7" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.703696 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-cs67x" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.710664 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-mnmmn" Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.754962 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-b5z5t"] Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.764747 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.765011 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dq2dv" Jan 21 10:08:21 crc kubenswrapper[4684]: E0121 10:08:21.765102 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:22.265079634 +0000 UTC m=+140.023162601 (durationBeforeRetry 500ms). 
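Every Mount/Unmount attempt on pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 above fails the same way: kubelet cannot find kubevirt.io.hostpath-provisioner among its registered CSI drivers, because the csi-hostpathplugin-s5mkm pod that would register it is itself still waiting for a sandbox. Below is a minimal sketch of that lookup, assuming a simple name-to-socket registry map; the type and socket path are illustrative stand-ins, not kubelet's actual internals, though the error string matches the one in the log.

```go
package main

import (
	"fmt"
	"sync"
)

// registry stands in for kubelet's map of CSI drivers that have completed
// plugin registration (hypothetical type; kubelet's real registry is
// populated by its plugin watcher as drivers register over a socket).
type registry struct {
	mu      sync.RWMutex
	drivers map[string]string // driver name -> registration socket
}

func (r *registry) newClient(driverName string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	sock, ok := r.drivers[driverName]
	if !ok {
		// Matches the error text repeated throughout this log.
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", driverName)
	}
	return sock, nil
}

func main() {
	r := &registry{drivers: map[string]string{}}

	// csi-hostpathplugin-s5mkm has no sandbox yet, so nothing has
	// registered the driver; every MountDevice/TearDownAt attempt fails:
	if _, err := r.newClient("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println("attacher.MountDevice would fail:", err)
	}

	// Once the plugin pod runs and registers (illustrative socket path),
	// the same lookup succeeds and the retries above stop:
	r.mu.Lock()
	r.drivers["kubevirt.io.hostpath-provisioner"] = "/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
	r.mu.Unlock()
	if sock, err := r.newClient("kubevirt.io.hostpath-provisioner"); err == nil {
		fmt.Println("client for driver at:", sock)
	}
}
```

The failure is therefore self-healing: it persists only for the window between kubelet startup and the hostpath plugin pod registering itself, which is exactly what the rest of this log shows.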
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.766180 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm"
Jan 21 10:08:21 crc kubenswrapper[4684]: E0121 10:08:21.767311 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:22.267302563 +0000 UTC m=+140.025385530 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.778288 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9xmr\" (UniqueName: \"kubernetes.io/projected/843f3aca-bc65-445d-bed9-afbaa8a052d7-kube-api-access-x9xmr\") pod \"csi-hostpathplugin-s5mkm\" (UID: \"843f3aca-bc65-445d-bed9-afbaa8a052d7\") " pod="hostpath-provisioner/csi-hostpathplugin-s5mkm"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.782064 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jfblx"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.854983 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6wfjd"
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.855999 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r"]
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.874808 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:08:21 crc kubenswrapper[4684]: E0121 10:08:21.875451 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:22.37542961 +0000 UTC m=+140.133512587 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.976979 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm"
Jan 21 10:08:21 crc kubenswrapper[4684]: E0121 10:08:21.977425 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:22.477409117 +0000 UTC m=+140.235492084 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:08:21 crc kubenswrapper[4684]: I0121 10:08:21.984673 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-s5mkm"
Jan 21 10:08:22 crc kubenswrapper[4684]: W0121 10:08:22.056855 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7bad9ca_8600_4221_b33c_89e055ff177d.slice/crio-ce1cf6b414cd9ed28096cff43114655edd6aee45ca6a2effe3099df0d5f65f28 WatchSource:0}: Error finding container ce1cf6b414cd9ed28096cff43114655edd6aee45ca6a2effe3099df0d5f65f28: Status 404 returned error can't find the container with id ce1cf6b414cd9ed28096cff43114655edd6aee45ca6a2effe3099df0d5f65f28
Jan 21 10:08:22 crc kubenswrapper[4684]: E0121 10:08:22.078466 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:22.57844337 +0000 UTC m=+140.336526337 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:08:22 crc kubenswrapper[4684]: I0121 10:08:22.078508 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:08:22 crc kubenswrapper[4684]: I0121 10:08:22.078863 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm"
Jan 21 10:08:22 crc kubenswrapper[4684]: E0121 10:08:22.079168 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:22.579160066 +0000 UTC m=+140.337243033 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:08:22 crc kubenswrapper[4684]: I0121 10:08:22.180512 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:08:22 crc kubenswrapper[4684]: E0121 10:08:22.180777 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:22.68074571 +0000 UTC m=+140.438828677 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:08:22 crc kubenswrapper[4684]: I0121 10:08:22.181098 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm"
Jan 21 10:08:22 crc kubenswrapper[4684]: E0121 10:08:22.181552 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:22.681531227 +0000 UTC m=+140.439614204 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:08:22 crc kubenswrapper[4684]: I0121 10:08:22.282112 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:08:22 crc kubenswrapper[4684]: E0121 10:08:22.282628 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:22.782609833 +0000 UTC m=+140.540692800 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
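Each failed operation above is parked by nestedpendingoperations: no new attempt is permitted until the logged deadline, which in every entry here is the failure time plus 500ms ("durationBeforeRetry 500ms"). A minimal sketch of that gating follows, assuming a per-operation record of the last failure; kubelet's real implementation uses an exponential backoff that can grow the delay on repeated failures of the same operation, so the constant 500ms below only mirrors what this particular log shows.

```go
package main

import (
	"fmt"
	"time"
)

// retryGate mimics the gating in the log: after a failure, no retry is
// permitted until lastError + delay (hypothetical type, not kubelet's).
type retryGate struct {
	lastError time.Time
	delay     time.Duration
}

func (g *retryGate) recordFailure(now time.Time) {
	g.lastError = now
	g.delay = 500 * time.Millisecond // "durationBeforeRetry 500ms"
}

func (g *retryGate) mayRetry(now time.Time) bool {
	return !now.Before(g.lastError.Add(g.delay))
}

func main() {
	// Failure time taken from the first TearDown error in this log
	// (10:08:21.560713); the permitted-retry deadline it computes matches
	// the logged "No retries permitted until ... 10:08:22.060688713".
	t0 := time.Date(2026, 1, 21, 10, 8, 21, 560713000, time.UTC)
	g := &retryGate{}
	g.recordFailure(t0)
	fmt.Println(g.mayRetry(t0.Add(100 * time.Millisecond))) // false: still parked
	fmt.Println(g.mayRetry(t0.Add(500 * time.Millisecond))) // true: retry may start
}
```

This explains the cadence of the repeated entries: each Unmount/Mount pair reappears roughly every 100-500ms as the reconciler re-queues the operations and the gate expires.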
Jan 21 10:08:22 crc kubenswrapper[4684]: I0121 10:08:22.384063 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm"
Jan 21 10:08:22 crc kubenswrapper[4684]: E0121 10:08:22.384488 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:22.884476106 +0000 UTC m=+140.642559073 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:08:22 crc kubenswrapper[4684]: I0121 10:08:22.443696 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-cs67x" event={"ID":"90af4d5f-871e-4bc3-87e7-8f954cbfc5ef","Type":"ContainerStarted","Data":"5daec8648bb3c853632c7a0ae0dc76aa85b359bb67f691466a252913b8c1c7d4"}
Jan 21 10:08:22 crc kubenswrapper[4684]: I0121 10:08:22.446465 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" event={"ID":"e7bad9ca-8600-4221-b33c-89e055ff177d","Type":"ContainerStarted","Data":"ce1cf6b414cd9ed28096cff43114655edd6aee45ca6a2effe3099df0d5f65f28"}
Jan 21 10:08:22 crc kubenswrapper[4684]: I0121 10:08:22.453437 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-q9wnn" event={"ID":"42c27350-86e3-4a02-9194-5dd24c297a12","Type":"ContainerStarted","Data":"d09eb5a3a9254e37e39a72776ee55ade4570f390536ec4b409f76ee168b95dd9"}
Jan 21 10:08:22 crc kubenswrapper[4684]: I0121 10:08:22.454340 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-65tjm" event={"ID":"9ffd57a7-387f-48ec-9643-a67d392ce9c0","Type":"ContainerStarted","Data":"f44f44cea813645dda5a6fb6d1d08b4f73a1c3e25d308fdaf3c50f7bc74dfef3"}
Jan 21 10:08:22 crc kubenswrapper[4684]: I0121 10:08:22.458691 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5rnkx" event={"ID":"0a7f0592-7c89-496c-acb0-3ae031dbffb1","Type":"ContainerStarted","Data":"d77ed17e850b1b947bf0595607ae3a924d57a12741dd29cb0258ad1aa83487ed"}
Jan 21 10:08:22 crc kubenswrapper[4684]: I0121 10:08:22.459031 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-5rnkx"
Jan 21 10:08:22 crc kubenswrapper[4684]: I0121 10:08:22.460221 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-5wsq8" event={"ID":"71784ae3-e88b-48b3-94e4-f312f673084d","Type":"ContainerStarted","Data":"c4dc7ab50e8c6ac3046fab9af53e70f73e4b812554f7f40a190ef5b2ead6d804"}
Jan 21 10:08:22 crc kubenswrapper[4684]: I0121 10:08:22.462465 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hz6v7" event={"ID":"e11d4c83-c131-4dd3-b20f-f38faad79236","Type":"ContainerStarted","Data":"ef64c11ec8fc12c752e63509b6324b777dd86bdbb8417f474596a7715c1dd932"}
Jan 21 10:08:22 crc kubenswrapper[4684]: I0121 10:08:22.466174 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-5rnkx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Jan 21 10:08:22 crc kubenswrapper[4684]: I0121 10:08:22.466235 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5rnkx" podUID="0a7f0592-7c89-496c-acb0-3ae031dbffb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Jan 21 10:08:22 crc kubenswrapper[4684]: I0121 10:08:22.479803 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-6g6vs" event={"ID":"50a26307-43c9-4325-b992-91cac18db66c","Type":"ContainerStarted","Data":"977d759f9f3fd10452a080a4acca25cc99afd212ac50d3fe537d65d83eb3dcb8"}
Jan 21 10:08:22 crc kubenswrapper[4684]: I0121 10:08:22.481839 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" event={"ID":"9379cfa5-977e-46d6-b841-f1b8c99a6c75","Type":"ContainerStarted","Data":"6b7da730989f3fedab0e61fe9c389a7d8e428eec3c1626fd6c4ee0ff1d000039"}
Jan 21 10:08:22 crc kubenswrapper[4684]: I0121 10:08:22.486851 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rjpqz" event={"ID":"47636a6f-dad5-46f0-b080-6d4c37652860","Type":"ContainerStarted","Data":"8dd938057e53a5c1a04d3aa9274520aa3d693e31d849d34fde4546591fde8358"}
Jan 21 10:08:22 crc kubenswrapper[4684]: I0121 10:08:22.487048 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:08:22 crc kubenswrapper[4684]: E0121 10:08:22.487574 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:22.987552932 +0000 UTC m=+140.745635899 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:08:22 crc kubenswrapper[4684]: I0121 10:08:22.497928 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k9lpf"
Jan 21 10:08:22 crc kubenswrapper[4684]: I0121 10:08:22.602703 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm"
Jan 21 10:08:22 crc kubenswrapper[4684]: E0121 10:08:22.617751 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:23.117719466 +0000 UTC m=+140.875802433 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:08:22 crc kubenswrapper[4684]: I0121 10:08:22.705652 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:08:22 crc kubenswrapper[4684]: E0121 10:08:22.706138 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:23.206112278 +0000 UTC m=+140.964195245 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:08:22 crc kubenswrapper[4684]: I0121 10:08:22.809149 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm"
Jan 21 10:08:22 crc kubenswrapper[4684]: E0121 10:08:22.809711 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:23.309694723 +0000 UTC m=+141.067777690 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:08:22 crc kubenswrapper[4684]: I0121 10:08:22.870308 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k9lpf" podStartSLOduration=120.870290824 podStartE2EDuration="2m0.870290824s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:22.827159156 +0000 UTC m=+140.585242143" watchObservedRunningTime="2026-01-21 10:08:22.870290824 +0000 UTC m=+140.628373791"
Jan 21 10:08:22 crc kubenswrapper[4684]: I0121 10:08:22.917021 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:08:22 crc kubenswrapper[4684]: E0121 10:08:22.917379 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:23.417333051 +0000 UTC m=+141.175416028 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:08:22 crc kubenswrapper[4684]: I0121 10:08:22.917817 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm"
Jan 21 10:08:22 crc kubenswrapper[4684]: E0121 10:08:22.918240 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:23.418231334 +0000 UTC m=+141.176314301 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:08:22 crc kubenswrapper[4684]: I0121 10:08:22.987509 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pxz9b" podStartSLOduration=121.987481563 podStartE2EDuration="2m1.987481563s" podCreationTimestamp="2026-01-21 10:06:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:22.984662503 +0000 UTC m=+140.742745480" watchObservedRunningTime="2026-01-21 10:08:22.987481563 +0000 UTC m=+140.745564530"
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.019944 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:08:23 crc kubenswrapper[4684]: E0121 10:08:23.020475 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:23.52045402 +0000 UTC m=+141.278536977 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
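The "Observed pod startup duration" entries are simple arithmetic: with firstStartedPulling/lastFinishedPulling at the zero time (no image pull was needed), podStartSLOduration is just watchObservedRunningTime minus podCreationTimestamp, and equals podStartE2EDuration. A quick check against the route-controller-manager entry above, using its logged timestamps:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the route-controller-manager-6576b87f9c-k9lpf entry.
	created, _ := time.Parse(time.RFC3339, "2026-01-21T10:06:22Z")
	watchObserved, _ := time.Parse(time.RFC3339Nano, "2026-01-21T10:08:22.870290824Z")

	// No pull window to subtract (pull timestamps are the zero time),
	// so the SLO duration equals the plain end-to-end duration.
	fmt.Println(watchObserved.Sub(created)) // 2m0.870290824s == podStartSLOduration=120.870290824
}
```

The two-minute figures for most operators here reflect the whole span from pod creation at ~10:06:21-22 to the node actually running them after kubelet restarted, not two minutes of container startup work.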
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.121701 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm"
Jan 21 10:08:23 crc kubenswrapper[4684]: E0121 10:08:23.122437 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:23.622422647 +0000 UTC m=+141.380505614 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.223278 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:08:23 crc kubenswrapper[4684]: E0121 10:08:23.223604 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:23.723570434 +0000 UTC m=+141.481653401 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.223703 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm"
Jan 21 10:08:23 crc kubenswrapper[4684]: E0121 10:08:23.224288 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:23.72428157 +0000 UTC m=+141.482364537 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.274755 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hz6v7" podStartSLOduration=122.27473531 podStartE2EDuration="2m2.27473531s" podCreationTimestamp="2026-01-21 10:06:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:23.273525466 +0000 UTC m=+141.031608433" watchObservedRunningTime="2026-01-21 10:08:23.27473531 +0000 UTC m=+141.032818277"
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.325230 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:08:23 crc kubenswrapper[4684]: E0121 10:08:23.325455 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:23.825417587 +0000 UTC m=+141.583500554 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.325503 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm"
Jan 21 10:08:23 crc kubenswrapper[4684]: E0121 10:08:23.325867 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:23.825854993 +0000 UTC m=+141.583938140 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.428805 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:08:23 crc kubenswrapper[4684]: E0121 10:08:23.429717 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:23.929695526 +0000 UTC m=+141.687778493 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.461516 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-5rnkx" podStartSLOduration=121.46147628 podStartE2EDuration="2m1.46147628s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:23.445243341 +0000 UTC m=+141.203326308" watchObservedRunningTime="2026-01-21 10:08:23.46147628 +0000 UTC m=+141.219559247"
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.531398 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm"
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.538731 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-5wsq8" event={"ID":"71784ae3-e88b-48b3-94e4-f312f673084d","Type":"ContainerStarted","Data":"59d5dafa1d344006be464b4920892af230af19a8e08dff99b83e223325d40c94"}
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.542788 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-cs67x" event={"ID":"90af4d5f-871e-4bc3-87e7-8f954cbfc5ef","Type":"ContainerStarted","Data":"ece4a2383201dad9fc397c9c2c6e4ce42287d9691988b2207d4d90d0428d5ded"}
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.548219 4684 generic.go:334] "Generic (PLEG): container finished" podID="e7bad9ca-8600-4221-b33c-89e055ff177d" containerID="d452b939eecd65e892c95374341cf9b3a6371854dc13fa33516a918c14dbaede" exitCode=0
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.548883 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" event={"ID":"e7bad9ca-8600-4221-b33c-89e055ff177d","Type":"ContainerDied","Data":"d452b939eecd65e892c95374341cf9b3a6371854dc13fa33516a918c14dbaede"}
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.551813 4684 generic.go:334] "Generic (PLEG): container finished" podID="9379cfa5-977e-46d6-b841-f1b8c99a6c75" containerID="2ddbd5984931eef5b339e35d961e0c526b0401dc3e143b8cbb840aff4c58abc2" exitCode=0
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.551927 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" event={"ID":"9379cfa5-977e-46d6-b841-f1b8c99a6c75","Type":"ContainerDied","Data":"2ddbd5984931eef5b339e35d961e0c526b0401dc3e143b8cbb840aff4c58abc2"}
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.582994 4684 generic.go:334] "Generic (PLEG): container finished" podID="47636a6f-dad5-46f0-b080-6d4c37652860" containerID="8dd938057e53a5c1a04d3aa9274520aa3d693e31d849d34fde4546591fde8358" exitCode=0
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.583146 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rjpqz" event={"ID":"47636a6f-dad5-46f0-b080-6d4c37652860","Type":"ContainerDied","Data":"8dd938057e53a5c1a04d3aa9274520aa3d693e31d849d34fde4546591fde8358"}
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.583188 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rjpqz" event={"ID":"47636a6f-dad5-46f0-b080-6d4c37652860","Type":"ContainerStarted","Data":"faa5cc4603c047b4b48d03d9447ad3a082df8e6a5b0a1b59d0c3d98977972e5b"}
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.583541 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rjpqz"
Jan 21 10:08:23 crc kubenswrapper[4684]: E0121 10:08:23.594629 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:24.084608742 +0000 UTC m=+141.842691709 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.597637 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-tv48v"]
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.628520 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dcnrm"]
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.633406 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:08:23 crc kubenswrapper[4684]: E0121 10:08:23.635570 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:24.135548549 +0000 UTC m=+141.893631516 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.641430 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-q9wnn" event={"ID":"42c27350-86e3-4a02-9194-5dd24c297a12","Type":"ContainerStarted","Data":"09af83433ee06d86f3a07de85a8a7d6faeab1ecde6eb9c3f46fb74c2afcd8d51"}
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.641497 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-q9wnn" event={"ID":"42c27350-86e3-4a02-9194-5dd24c297a12","Type":"ContainerStarted","Data":"f487a98e78e6f0b33e4fdb73a01c1a7b8eefb29498398d43138438046418d840"}
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.646217 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-x97lq"]
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.657517 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jpqvb"]
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.663651 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-5wsq8" podStartSLOduration=122.66362182 podStartE2EDuration="2m2.66362182s" podCreationTimestamp="2026-01-21 10:06:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:23.583121869 +0000 UTC m=+141.341204836" watchObservedRunningTime="2026-01-21 10:08:23.66362182 +0000 UTC m=+141.421704787"
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.668262 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-6g6vs" event={"ID":"50a26307-43c9-4325-b992-91cac18db66c","Type":"ContainerStarted","Data":"4f85831e785805f1a28b79f3fd69d365342fb7687494e54bc7d9697df9576f58"}
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.686082 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-65tjm" event={"ID":"9ffd57a7-387f-48ec-9643-a67d392ce9c0","Type":"ContainerStarted","Data":"55dc23823f499ec54536f34974da194d646a2ce40f05611e59696ec26d481a7b"}
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.686166 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-65tjm" event={"ID":"9ffd57a7-387f-48ec-9643-a67d392ce9c0","Type":"ContainerStarted","Data":"ba5df5ead3e86e83e68fa4883172603702f6e5b288f5549a97f1c393a2390ffd"}
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.693621 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-5rnkx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.693685 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5rnkx" podUID="0a7f0592-7c89-496c-acb0-3ae031dbffb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.713745 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-cs67x" podStartSLOduration=5.713722477 podStartE2EDuration="5.713722477s" podCreationTimestamp="2026-01-21 10:08:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:23.709585319 +0000 UTC m=+141.467668316" watchObservedRunningTime="2026-01-21 10:08:23.713722477 +0000 UTC m=+141.471805444"
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.735105 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-65tjm" podStartSLOduration=121.735080158 podStartE2EDuration="2m1.735080158s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:23.73233327 +0000 UTC m=+141.490416237" watchObservedRunningTime="2026-01-21 10:08:23.735080158 +0000 UTC m=+141.493163125"
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.737547 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm"
Jan 21 10:08:23 crc kubenswrapper[4684]: E0121 10:08:23.738081 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:24.238062955 +0000 UTC m=+141.996145922 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.759823 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rjpqz" podStartSLOduration=121.759794771 podStartE2EDuration="2m1.759794771s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:23.756194642 +0000 UTC m=+141.514277609" watchObservedRunningTime="2026-01-21 10:08:23.759794771 +0000 UTC m=+141.517877738"
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.772584 4684 csr.go:261] certificate signing request csr-sntfm is approved, waiting to be issued
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.780955 4684 csr.go:257] certificate signing request csr-sntfm is issued
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.783489 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-6g6vs" podStartSLOduration=121.783462764 podStartE2EDuration="2m1.783462764s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:23.782190189 +0000 UTC m=+141.540273166" watchObservedRunningTime="2026-01-21 10:08:23.783462764 +0000 UTC m=+141.541545731"
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.809349 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-q9wnn" podStartSLOduration=121.809331587 podStartE2EDuration="2m1.809331587s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:23.807980438 +0000 UTC m=+141.566063405" watchObservedRunningTime="2026-01-21 10:08:23.809331587 +0000 UTC m=+141.567414554"
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.838310 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:08:23 crc kubenswrapper[4684]: E0121 10:08:23.838606 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:24.338562499 +0000 UTC m=+142.096645466 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.839138 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm"
Jan 21 10:08:23 crc kubenswrapper[4684]: E0121 10:08:23.840580 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:24.340100015 +0000 UTC m=+142.098182982 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.940810 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl"
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.944490 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:08:23 crc kubenswrapper[4684]: E0121 10:08:23.944686 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:24.444656344 +0000 UTC m=+142.202739311 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:08:23 crc kubenswrapper[4684]: I0121 10:08:23.944982 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm"
Jan 21 10:08:23 crc kubenswrapper[4684]: E0121 10:08:23.946432 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:24.446412266 +0000 UTC m=+142.204495453 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:08:24 crc kubenswrapper[4684]: E0121 10:08:24.023963 4684 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9379cfa5_977e_46d6_b841_f1b8c99a6c75.slice/crio-2ddbd5984931eef5b339e35d961e0c526b0401dc3e143b8cbb840aff4c58abc2.scope\": RecentStats: unable to find data in memory cache]"
Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.048046 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:08:24 crc kubenswrapper[4684]: E0121 10:08:24.049171 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:24.549146461 +0000 UTC m=+142.307229428 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.150730 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm"
Jan 21 10:08:24 crc kubenswrapper[4684]: E0121 10:08:24.151503 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:24.651489351 +0000 UTC m=+142.409572318 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.230167 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jmrsp"]
Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.252752 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:08:24 crc kubenswrapper[4684]: E0121 10:08:24.253388 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:24.753342504 +0000 UTC m=+142.511425471 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.260521 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fbz"] Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.275801 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t4gw6"] Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.286608 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jgwzk"] Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.296872 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-6g6vs" Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.304826 4684 patch_prober.go:28] interesting pod/router-default-5444994796-6g6vs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:08:24 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld Jan 21 10:08:24 crc kubenswrapper[4684]: [+]process-running ok Jan 21 10:08:24 crc kubenswrapper[4684]: healthz check failed Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.304951 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6g6vs" podUID="50a26307-43c9-4325-b992-91cac18db66c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.306658 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-x6zks"] Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.319935 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-98cfq"] Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.338056 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-plndh"] Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.355269 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:24 crc kubenswrapper[4684]: E0121 10:08:24.356475 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:24.856455831 +0000 UTC m=+142.614538798 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.365217 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7khjp"] Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.460653 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:24 crc kubenswrapper[4684]: E0121 10:08:24.461145 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:24.961126605 +0000 UTC m=+142.719209572 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.562243 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:24 crc kubenswrapper[4684]: E0121 10:08:24.562832 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:25.062806302 +0000 UTC m=+142.820889269 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.656895 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483160-9pprb"] Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.656960 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-dq2dv"] Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.664454 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:24 crc kubenswrapper[4684]: E0121 10:08:24.665119 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:25.16509149 +0000 UTC m=+142.923174457 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.681965 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hvhlb"] Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.709708 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vn2nz"] Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.719945 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lvmjt"] Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.743459 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dd9dl"] Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.750158 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9x6qf"] Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.752961 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-qjbj4"] Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.774643 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:24 crc kubenswrapper[4684]: E0121 10:08:24.775384 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:25.275339153 +0000 UTC m=+143.033422130 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.783242 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-21 10:03:23 +0000 UTC, rotation deadline is 2026-11-08 09:46:44.081425151 +0000 UTC Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.783284 4684 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6983h38m19.298145515s for next certificate rotation Jan 21 10:08:24 crc kubenswrapper[4684]: W0121 10:08:24.784563 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f0d0b06_b45f_4c0b_9ce3_bcc3b0da6c75.slice/crio-7d26345d5facfa7612d34f5e5f0aeec23577168ef74a08bd8d4ccc84ac0bec1e WatchSource:0}: Error finding container 7d26345d5facfa7612d34f5e5f0aeec23577168ef74a08bd8d4ccc84ac0bec1e: Status 404 returned error can't find the container with id 7d26345d5facfa7612d34f5e5f0aeec23577168ef74a08bd8d4ccc84ac0bec1e Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.794023 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vnnj7"] Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.797825 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" event={"ID":"9379cfa5-977e-46d6-b841-f1b8c99a6c75","Type":"ContainerStarted","Data":"eb2c1a1a2c933d63841edcc274b3431c224075c53b73cf72ba18064db24ebbb2"} Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.803868 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t4gw6" event={"ID":"983911ed-39aa-4717-bc5b-a03c1a5ab47d","Type":"ContainerStarted","Data":"764e561fd99fd7e583bc323fe9b9dc09e8ef698a8f0986f632d927b0f3468f18"} Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.805565 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-plndh" event={"ID":"af694d93-1240-4a87-a2fe-153bb2401143","Type":"ContainerStarted","Data":"b0138d88a2d62021f9c030d1d37ffd881dc020407df9b147297ef764513a8908"} Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.812187 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jpqvb" 
event={"ID":"638fa066-691d-4c65-9548-dc4b2fd35640","Type":"ContainerStarted","Data":"34abdf898d422f2261c201a558560dc537910a2638a18848248cfb3181294f52"} Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.812263 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jpqvb" event={"ID":"638fa066-691d-4c65-9548-dc4b2fd35640","Type":"ContainerStarted","Data":"3cdbc23b05af1de049c8419c896f593d167ed2280207ef2c06c1b20f5dba5603"} Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.828806 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72t62"] Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.829466 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x6zks" event={"ID":"d1cc5379-f931-4e88-b422-4ecc9e33e2a0","Type":"ContainerStarted","Data":"de10cba92bb2ad6a69c6f8058a0bef9ed6075bf2508ce8edae8f02f2639cde97"} Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.842137 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fbz" event={"ID":"c5367e25-e2b2-4280-8e56-ad67c088c382","Type":"ContainerStarted","Data":"ad25870e8f0e1240a88ceef53cea551fa6500ba9bc0163a824d935f727aff242"} Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.843834 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fbz" Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.846616 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jmrsp" event={"ID":"e9ebaa53-a616-4d3f-a69f-19da113978c3","Type":"ContainerStarted","Data":"8f91f62ed7f27f210afbf10654fc06da91447ec2d101e31c06ba1e30c92f96c9"} Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.847053 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-jmrsp" Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.849640 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-jgwzk" event={"ID":"634c4722-8d7d-463a-9844-8213116a4ce7","Type":"ContainerStarted","Data":"664e3fc24b00d18ca01225548c680d647a559fae009c76ffe65f8fb87df712b3"} Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.853348 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-tv48v" event={"ID":"2729c786-b7ea-4398-8023-bc7be080c44f","Type":"ContainerStarted","Data":"a82b4669c3e65431daa013db00e75342ca3932741121902dfb844ec978bb1afc"} Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.855836 4684 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-v6fbz container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.855920 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fbz" podUID="c5367e25-e2b2-4280-8e56-ad67c088c382" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection 
refused" Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.858719 4684 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-jmrsp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.858832 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-jmrsp" podUID="e9ebaa53-a616-4d3f-a69f-19da113978c3" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.860453 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7khjp" event={"ID":"2a16e132-7543-4642-bd55-b3abf288e009","Type":"ContainerStarted","Data":"94354d2d5672e3e90ff1d041a81e43dce8c5a5fb1a4927da893ad0953052dfb4"} Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.869882 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fbz" podStartSLOduration=122.869851744 podStartE2EDuration="2m2.869851744s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:24.866232254 +0000 UTC m=+142.624315221" watchObservedRunningTime="2026-01-21 10:08:24.869851744 +0000 UTC m=+142.627934711" Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.871915 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jpqvb" podStartSLOduration=122.871906877 podStartE2EDuration="2m2.871906877s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:24.833507687 +0000 UTC m=+142.591590654" watchObservedRunningTime="2026-01-21 10:08:24.871906877 +0000 UTC m=+142.629989844" Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.877350 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:24 crc kubenswrapper[4684]: E0121 10:08:24.879133 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:25.377521647 +0000 UTC m=+143.135604614 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.882105 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.893773 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-c5hr7"] Jan 21 10:08:24 crc kubenswrapper[4684]: E0121 10:08:24.893778 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:25.393730746 +0000 UTC m=+143.151813713 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.904576 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6wfjd"] Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.921216 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" event={"ID":"e7bad9ca-8600-4221-b33c-89e055ff177d","Type":"ContainerStarted","Data":"c203acecbf356ecf0e6092258a1ab320fac9aec18537c543407b2d7be646c197"} Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.922781 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-jmrsp" podStartSLOduration=122.922764841 podStartE2EDuration="2m2.922764841s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:24.904946976 +0000 UTC m=+142.663029943" watchObservedRunningTime="2026-01-21 10:08:24.922764841 +0000 UTC m=+142.680847808" Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.930871 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-s5mkm"] Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.935678 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dcnrm" event={"ID":"7f11428f-574a-480a-b8af-b9a65d9de720","Type":"ContainerStarted","Data":"41ee9c047657eafa82d6c5558253bf44c687cc41c7427be2d56df287cca59913"} Jan 21 10:08:24 crc 
kubenswrapper[4684]: I0121 10:08:24.935731 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dcnrm" event={"ID":"7f11428f-574a-480a-b8af-b9a65d9de720","Type":"ContainerStarted","Data":"4b7285b54c05857c1ae0503cdbecd3f57a9e4b23b056f68c2e88428ce1252c2e"} Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.959694 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-dg7k7"] Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.971240 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-98cfq" event={"ID":"728bf40a-9dfc-4850-a3cf-2b506c0fe68c","Type":"ContainerStarted","Data":"5fdf6655c619870bc2ba7ab76e50b82721002ca7534dc1feb13996738070681f"} Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.973931 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-64j7q"] Jan 21 10:08:24 crc kubenswrapper[4684]: I0121 10:08:24.983428 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:24 crc kubenswrapper[4684]: E0121 10:08:24.985671 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:25.485643824 +0000 UTC m=+143.243726791 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:25 crc kubenswrapper[4684]: I0121 10:08:25.016037 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" podStartSLOduration=123.016006767 podStartE2EDuration="2m3.016006767s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:24.965025288 +0000 UTC m=+142.723108255" watchObservedRunningTime="2026-01-21 10:08:25.016006767 +0000 UTC m=+142.774089734" Jan 21 10:08:25 crc kubenswrapper[4684]: I0121 10:08:25.021864 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-x97lq" event={"ID":"e6097625-6479-4b86-b888-78d72d1c7089","Type":"ContainerStarted","Data":"0410e03e98a344cc2d1f23c78489fdcbcc2c2acabbb088316e9c19387b7a493d"} Jan 21 10:08:25 crc kubenswrapper[4684]: I0121 10:08:25.065000 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mnmmn"] Jan 21 10:08:25 crc kubenswrapper[4684]: I0121 10:08:25.066337 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-jfblx"] Jan 21 10:08:25 crc kubenswrapper[4684]: I0121 10:08:25.084126 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rjpqz" Jan 21 10:08:25 crc kubenswrapper[4684]: I0121 10:08:25.086345 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:25 crc kubenswrapper[4684]: E0121 10:08:25.091597 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:25.591574842 +0000 UTC m=+143.349657799 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:25 crc kubenswrapper[4684]: W0121 10:08:25.137894 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod843f3aca_bc65_445d_bed9_afbaa8a052d7.slice/crio-c1f8592f2900f212446c331bcffaa3f7f18693f18ab39e74da4b6b96d7379d51 WatchSource:0}: Error finding container c1f8592f2900f212446c331bcffaa3f7f18693f18ab39e74da4b6b96d7379d51: Status 404 returned error can't find the container with id c1f8592f2900f212446c331bcffaa3f7f18693f18ab39e74da4b6b96d7379d51 Jan 21 10:08:25 crc kubenswrapper[4684]: I0121 10:08:25.192081 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:25 crc kubenswrapper[4684]: E0121 10:08:25.192673 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:25.692327685 +0000 UTC m=+143.450410662 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:25 crc kubenswrapper[4684]: W0121 10:08:25.194856 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d9a24c5_ed0d_473e_a562_ee476c663ed5.slice/crio-3881fb59b4dd5c6d09af7c1e76c6946688d2bea3570f110ff59053c0e0ff896a WatchSource:0}: Error finding container 3881fb59b4dd5c6d09af7c1e76c6946688d2bea3570f110ff59053c0e0ff896a: Status 404 returned error can't find the container with id 3881fb59b4dd5c6d09af7c1e76c6946688d2bea3570f110ff59053c0e0ff896a Jan 21 10:08:25 crc kubenswrapper[4684]: W0121 10:08:25.218670 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecc3b392_1d71_4544_93ac_c93169931c41.slice/crio-77ee79d555ac36d0da9ac7e7f4144ba43c6eea10fe43b85f60e2804c27409f98 WatchSource:0}: Error finding container 77ee79d555ac36d0da9ac7e7f4144ba43c6eea10fe43b85f60e2804c27409f98: Status 404 returned error can't find the container with id 77ee79d555ac36d0da9ac7e7f4144ba43c6eea10fe43b85f60e2804c27409f98 Jan 21 10:08:25 crc kubenswrapper[4684]: W0121 10:08:25.230413 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod804798e6_979c_46ae_bc4b_e6bd55ce6e9b.slice/crio-60fd975c2a8f64401c102d19a42d6f30555352c871c9510625e0887dcd1fcd52 WatchSource:0}: Error finding container 60fd975c2a8f64401c102d19a42d6f30555352c871c9510625e0887dcd1fcd52: Status 404 returned error can't find the container with id 60fd975c2a8f64401c102d19a42d6f30555352c871c9510625e0887dcd1fcd52 Jan 21 10:08:25 crc kubenswrapper[4684]: I0121 10:08:25.294287 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:25 crc kubenswrapper[4684]: E0121 10:08:25.294816 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:25.79479915 +0000 UTC m=+143.552882117 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:25 crc kubenswrapper[4684]: I0121 10:08:25.300623 4684 patch_prober.go:28] interesting pod/router-default-5444994796-6g6vs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:08:25 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld Jan 21 10:08:25 crc kubenswrapper[4684]: [+]process-running ok Jan 21 10:08:25 crc kubenswrapper[4684]: healthz check failed Jan 21 10:08:25 crc kubenswrapper[4684]: I0121 10:08:25.300722 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6g6vs" podUID="50a26307-43c9-4325-b992-91cac18db66c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:08:25 crc kubenswrapper[4684]: I0121 10:08:25.396839 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:25 crc kubenswrapper[4684]: E0121 10:08:25.397318 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:25.897290577 +0000 UTC m=+143.655373544 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:25 crc kubenswrapper[4684]: I0121 10:08:25.397430 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:25 crc kubenswrapper[4684]: E0121 10:08:25.398002 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:25.897990271 +0000 UTC m=+143.656073238 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:25 crc kubenswrapper[4684]: I0121 10:08:25.499073 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:25 crc kubenswrapper[4684]: E0121 10:08:25.499569 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:25.999548904 +0000 UTC m=+143.757631871 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:25 crc kubenswrapper[4684]: I0121 10:08:25.600723 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:25 crc kubenswrapper[4684]: E0121 10:08:25.601149 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:26.101129467 +0000 UTC m=+143.859212424 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:25 crc kubenswrapper[4684]: I0121 10:08:25.703195 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:25 crc kubenswrapper[4684]: E0121 10:08:25.703351 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:26.203324862 +0000 UTC m=+143.961407839 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:25 crc kubenswrapper[4684]: I0121 10:08:25.703741 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:25 crc kubenswrapper[4684]: E0121 10:08:25.704322 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:26.204306167 +0000 UTC m=+143.962389134 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:25 crc kubenswrapper[4684]: I0121 10:08:25.804940 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:25 crc kubenswrapper[4684]: E0121 10:08:25.805448 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:26.305351491 +0000 UTC m=+144.063434458 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:25 crc kubenswrapper[4684]: I0121 10:08:25.906433 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:25 crc kubenswrapper[4684]: E0121 10:08:25.906996 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:26.406980317 +0000 UTC m=+144.165063284 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.019904 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:26 crc kubenswrapper[4684]: E0121 10:08:26.020656 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:26.520617859 +0000 UTC m=+144.278700836 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.086964 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-98cfq" event={"ID":"728bf40a-9dfc-4850-a3cf-2b506c0fe68c","Type":"ContainerStarted","Data":"e740b6268653ed387efae69f2d5917dca6b8f10f25cef564ec962b070ac6b898"} Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.104921 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hvhlb" event={"ID":"4f0d0b06-b45f-4c0b-9ce3-bcc3b0da6c75","Type":"ContainerStarted","Data":"7d26345d5facfa7612d34f5e5f0aeec23577168ef74a08bd8d4ccc84ac0bec1e"} Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.105398 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hvhlb" Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.118270 4684 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-hvhlb container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.118347 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hvhlb" podUID="4f0d0b06-b45f-4c0b-9ce3-bcc3b0da6c75" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.120549 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dcnrm" event={"ID":"7f11428f-574a-480a-b8af-b9a65d9de720","Type":"ContainerStarted","Data":"5b2c413595d53e945477d6efc1f889f669ddc1124ffd821a062ad88fd63a45eb"} Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.121337 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dcnrm" Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.125786 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:26 crc kubenswrapper[4684]: E0121 10:08:26.126325 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:26.626308159 +0000 UTC m=+144.384391126 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.126792 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.126826 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.141195 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-x97lq" event={"ID":"e6097625-6479-4b86-b888-78d72d1c7089","Type":"ContainerStarted","Data":"f0277487d929d29f8c04104a93ad9b50a92d74fcfbbfd241866d5a6c62d7c2ad"} Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.143144 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hvhlb" podStartSLOduration=124.143127499 podStartE2EDuration="2m4.143127499s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:26.142344301 +0000 UTC m=+143.900427268" watchObservedRunningTime="2026-01-21 10:08:26.143127499 +0000 UTC m=+143.901210466" Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.148699 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lvmjt" event={"ID":"6b3f6c72-88f5-4e25-a2dc-fbc18e402afc","Type":"ContainerStarted","Data":"1e51cc17d6c56d91990156632623867f80b54e890163b99b8803993217f60a75"} Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.198266 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.198638 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72t62" event={"ID":"6c455bb4-0425-4c3e-8ad4-e49ffe9dd69a","Type":"ContainerStarted","Data":"647062c4c9de32f9a98795321cdac676437e5b08c437453eca19f7c14147177a"} Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.204441 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jfblx" event={"ID":"804798e6-979c-46ae-bc4b-e6bd55ce6e9b","Type":"ContainerStarted","Data":"60fd975c2a8f64401c102d19a42d6f30555352c871c9510625e0887dcd1fcd52"} Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.210308 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-9pprb" event={"ID":"34c69792-c774-40f9-a0de-d6f1a6f82714","Type":"ContainerStarted","Data":"392b7419a03d980ad1336dc178d203186f5ce7cd936f5cbb08d149a772961c37"} Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.210372 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-9pprb" event={"ID":"34c69792-c774-40f9-a0de-d6f1a6f82714","Type":"ContainerStarted","Data":"46e4215bf57151971d6622e7f2597def2d73229893abce5bafc25cca2cd28edf"} Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.227065 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.228637 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9x6qf" event={"ID":"510de763-cfdd-4464-a9d6-d061ad7b88bf","Type":"ContainerStarted","Data":"1fdd05463211279f0144c7046d04e7f97814c3fbbe7502352cc9d9b6ffaaf28d"} Jan 21 10:08:26 crc kubenswrapper[4684]: E0121 10:08:26.229492 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:26.729461318 +0000 UTC m=+144.487544435 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.257092 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dcnrm" podStartSLOduration=124.257065513 podStartE2EDuration="2m4.257065513s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:26.17934586 +0000 UTC m=+143.937428827" watchObservedRunningTime="2026-01-21 10:08:26.257065513 +0000 UTC m=+144.015148470" Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.289480 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" event={"ID":"9379cfa5-977e-46d6-b841-f1b8c99a6c75","Type":"ContainerStarted","Data":"516c93819153df52efa084c52a42cd5ed83255c644847d05267d8be7de27cfd0"} Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.301775 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mnmmn" event={"ID":"7957fca4-e890-434e-bf3f-d77a83b1c99d","Type":"ContainerStarted","Data":"2b7d055d2c4b37c01be773ed57b50b6c7e9f8cdc9ebcc55423fb030042fcf602"} Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.302601 4684 patch_prober.go:28] interesting pod/router-default-5444994796-6g6vs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:08:26 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld Jan 21 10:08:26 crc kubenswrapper[4684]: [+]process-running ok Jan 21 10:08:26 crc kubenswrapper[4684]: healthz check failed Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.302667 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6g6vs" podUID="50a26307-43c9-4325-b992-91cac18db66c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.320787 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t4gw6" event={"ID":"983911ed-39aa-4717-bc5b-a03c1a5ab47d","Type":"ContainerStarted","Data":"3b908ef6897542695de96564150492e52655c530543364caead2a58969676b35"} Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.329967 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:26 crc kubenswrapper[4684]: E0121 10:08:26.330522 4684 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:26.830507292 +0000 UTC m=+144.588590259 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.331269 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-9pprb" podStartSLOduration=124.331257889 podStartE2EDuration="2m4.331257889s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:26.286462491 +0000 UTC m=+144.044545468" watchObservedRunningTime="2026-01-21 10:08:26.331257889 +0000 UTC m=+144.089340856" Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.370065 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-s5mkm" event={"ID":"843f3aca-bc65-445d-bed9-afbaa8a052d7","Type":"ContainerStarted","Data":"c1f8592f2900f212446c331bcffaa3f7f18693f18ab39e74da4b6b96d7379d51"} Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.389499 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-t4gw6" podStartSLOduration=124.389480616 podStartE2EDuration="2m4.389480616s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:26.388543983 +0000 UTC m=+144.146626950" watchObservedRunningTime="2026-01-21 10:08:26.389480616 +0000 UTC m=+144.147563583" Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.398049 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" podStartSLOduration=125.398035671 podStartE2EDuration="2m5.398035671s" podCreationTimestamp="2026-01-21 10:06:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:26.336576579 +0000 UTC m=+144.094659546" watchObservedRunningTime="2026-01-21 10:08:26.398035671 +0000 UTC m=+144.156118638" Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.413039 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-plndh" event={"ID":"af694d93-1240-4a87-a2fe-153bb2401143","Type":"ContainerStarted","Data":"6d71a287adea5fcd561d5bcd32b7d5d0ceb279d070f492440382163f4d4708f4"} Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.418002 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.438904 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:26 crc kubenswrapper[4684]: E0121 10:08:26.440241 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:26.940206235 +0000 UTC m=+144.698289202 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.450940 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-plndh" podStartSLOduration=125.450921407 podStartE2EDuration="2m5.450921407s" podCreationTimestamp="2026-01-21 10:06:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:26.448691697 +0000 UTC m=+144.206774654" watchObservedRunningTime="2026-01-21 10:08:26.450921407 +0000 UTC m=+144.209004374" Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.464097 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-dg7k7" event={"ID":"0d9a24c5-ed0d-473e-a562-ee476c663ed5","Type":"ContainerStarted","Data":"3881fb59b4dd5c6d09af7c1e76c6946688d2bea3570f110ff59053c0e0ff896a"} Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.544884 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:26 crc kubenswrapper[4684]: E0121 10:08:26.561853 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:27.061673168 +0000 UTC m=+144.819756135 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.589111 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-c5hr7" podStartSLOduration=124.589090665 podStartE2EDuration="2m4.589090665s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:26.586518854 +0000 UTC m=+144.344601831" watchObservedRunningTime="2026-01-21 10:08:26.589090665 +0000 UTC m=+144.347173622" Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.646615 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:26 crc kubenswrapper[4684]: E0121 10:08:26.647964 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:27.147936674 +0000 UTC m=+144.906019651 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.689406 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-jgwzk" podStartSLOduration=124.689383262 podStartE2EDuration="2m4.689383262s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:26.683863416 +0000 UTC m=+144.441946383" watchObservedRunningTime="2026-01-21 10:08:26.689383262 +0000 UTC m=+144.447466229" Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.744507 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-tv48v" podStartSLOduration=124.744488038 podStartE2EDuration="2m4.744488038s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:26.741905425 +0000 UTC m=+144.499988392" watchObservedRunningTime="2026-01-21 10:08:26.744488038 +0000 UTC m=+144.502571005" Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.748983 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:26 crc kubenswrapper[4684]: E0121 10:08:26.756779 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:27.256761316 +0000 UTC m=+145.014844273 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.832286 4684 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dd9dl container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" start-of-body= Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.832346 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dd9dl" podUID="839bb9d6-4528-484e-9e12-ea749b5e177c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.832348 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qjbj4" podStartSLOduration=124.832325501 podStartE2EDuration="2m4.832325501s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:26.786481995 +0000 UTC m=+144.544564962" watchObservedRunningTime="2026-01-21 10:08:26.832325501 +0000 UTC m=+144.590408468" Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.849986 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.850589 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-x6zks" podStartSLOduration=124.850561422 podStartE2EDuration="2m4.850561422s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:26.832781857 +0000 UTC m=+144.590864824" watchObservedRunningTime="2026-01-21 10:08:26.850561422 +0000 UTC m=+144.608644379" Jan 21 10:08:26 crc kubenswrapper[4684]: E0121 10:08:26.851546 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:27.351346859 +0000 UTC m=+145.109429836 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.932922 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-vnnj7" podStartSLOduration=8.932895358 podStartE2EDuration="8.932895358s" podCreationTimestamp="2026-01-21 10:08:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:26.921298904 +0000 UTC m=+144.679381881" watchObservedRunningTime="2026-01-21 10:08:26.932895358 +0000 UTC m=+144.690978325" Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.933261 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dd9dl" podStartSLOduration=124.933254901 podStartE2EDuration="2m4.933254901s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:26.890951032 +0000 UTC m=+144.649033999" watchObservedRunningTime="2026-01-21 10:08:26.933254901 +0000 UTC m=+144.691337868" Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.951775 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-c5hr7" event={"ID":"6f6dc594-89a5-4c0f-9132-296c2e028c7b","Type":"ContainerStarted","Data":"905d2137ce6dc249a0f7297936b897338b08533906e61cdfb2f2b02502e4dec5"} Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.951832 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-jgwzk" Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.951853 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vn2nz" event={"ID":"366b5359-ea1c-4565-ae7b-53a71eadfa3e","Type":"ContainerStarted","Data":"a83fa6ee78e81ec7e531ebd477db8d9d6ed7a60dffae5dd87b0236efc8421c05"} Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.951867 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-jgwzk" event={"ID":"634c4722-8d7d-463a-9844-8213116a4ce7","Type":"ContainerStarted","Data":"c4fc58e5001dbb8a6b33cd226e75c40fcd278978b430821092037dee1f0fd83d"} Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.951902 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-jmrsp" Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.951914 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dd9dl" Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.951923 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qjbj4" 
event={"ID":"d0eb2234-baac-4251-8bf4-631c7407c09f","Type":"ContainerStarted","Data":"09dab35c0481a9610c19db95cf34b7af64510b73c3cb966df561459a1f577b8c"} Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.951936 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jmrsp" event={"ID":"e9ebaa53-a616-4d3f-a69f-19da113978c3","Type":"ContainerStarted","Data":"c95ea544d21eb96a17757ebdea5d09cd4b9219952e350e37a6192c7b310012c2"} Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.951954 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fbz" Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.951963 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6wfjd" event={"ID":"603ee3d6-73a1-4796-a857-84f9c889b3af","Type":"ContainerStarted","Data":"1fedff62e2ca96a046c5e0f2649198f0bfaaa6171d161e0c6de7c32e99dd019c"} Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.951986 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-tv48v" event={"ID":"2729c786-b7ea-4398-8023-bc7be080c44f","Type":"ContainerStarted","Data":"92a8f72c7c9491ed1807b9214099cb60515a36c9919195aeb38405bc4f1545e4"} Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.952014 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qsl5r" Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.952023 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x6zks" event={"ID":"d1cc5379-f931-4e88-b422-4ecc9e33e2a0","Type":"ContainerStarted","Data":"8b1e4e6bdd843af2c9b69052d95ebde8088303e494b28a5efe45c9557c1dc01d"} Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.952040 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-64j7q" event={"ID":"ecc3b392-1d71-4544-93ac-c93169931c41","Type":"ContainerStarted","Data":"77ee79d555ac36d0da9ac7e7f4144ba43c6eea10fe43b85f60e2804c27409f98"} Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.952050 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dq2dv" event={"ID":"cd90b7f0-d328-4cfb-ad0d-83cf58bee09d","Type":"ContainerStarted","Data":"e242b966d18d1924eb3cb7523088558cc4cbe79d2b3118b788ffe10ee3193b0b"} Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.952060 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dq2dv" event={"ID":"cd90b7f0-d328-4cfb-ad0d-83cf58bee09d","Type":"ContainerStarted","Data":"0431aba998a138e4e652857d48109dcd0ab658ada2ed78a9b9d19a37d5f31910"} Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.952070 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vnnj7" event={"ID":"565d1a27-65a5-420c-bd5c-6041e0bc7c2c","Type":"ContainerStarted","Data":"dea15f22a0bea59b311e9521c3bbd6616b38a6677dbbc7d7566a2c3b9fef9627"} Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.952080 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vnnj7" 
event={"ID":"565d1a27-65a5-420c-bd5c-6041e0bc7c2c","Type":"ContainerStarted","Data":"27996aba4673273e9fc84ee676fa4439427be5d37e49935565b954630c9470cf"} Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.952089 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-v6fbz" event={"ID":"c5367e25-e2b2-4280-8e56-ad67c088c382","Type":"ContainerStarted","Data":"e7bfd8d7d723e535576fe23722b2fca478ffae3300060a474d71d392166647bf"} Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.952099 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dd9dl" event={"ID":"839bb9d6-4528-484e-9e12-ea749b5e177c","Type":"ContainerStarted","Data":"95d1f306632761c88ccda8b14fe4d818e584031364b84dfdfd329d176f8f53d1"} Jan 21 10:08:26 crc kubenswrapper[4684]: I0121 10:08:26.952905 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:26 crc kubenswrapper[4684]: E0121 10:08:26.965174 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:27.465155199 +0000 UTC m=+145.223238156 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:27 crc kubenswrapper[4684]: I0121 10:08:27.057023 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:27 crc kubenswrapper[4684]: E0121 10:08:27.057595 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:27.557571125 +0000 UTC m=+145.315654102 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:27 crc kubenswrapper[4684]: I0121 10:08:27.162661 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:27 crc kubenswrapper[4684]: E0121 10:08:27.163562 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:27.663544635 +0000 UTC m=+145.421627602 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:27 crc kubenswrapper[4684]: I0121 10:08:27.263957 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:27 crc kubenswrapper[4684]: E0121 10:08:27.264313 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:27.764295379 +0000 UTC m=+145.522378346 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:27 crc kubenswrapper[4684]: I0121 10:08:27.311495 4684 patch_prober.go:28] interesting pod/router-default-5444994796-6g6vs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:08:27 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld Jan 21 10:08:27 crc kubenswrapper[4684]: [+]process-running ok Jan 21 10:08:27 crc kubenswrapper[4684]: healthz check failed Jan 21 10:08:27 crc kubenswrapper[4684]: I0121 10:08:27.311566 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6g6vs" podUID="50a26307-43c9-4325-b992-91cac18db66c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:08:27 crc kubenswrapper[4684]: I0121 10:08:27.371828 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:27 crc kubenswrapper[4684]: E0121 10:08:27.372226 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:27.872209397 +0000 UTC m=+145.630292364 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:27 crc kubenswrapper[4684]: I0121 10:08:27.372634 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-plndh" Jan 21 10:08:27 crc kubenswrapper[4684]: I0121 10:08:27.475425 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:27 crc kubenswrapper[4684]: E0121 10:08:27.476257 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:27.976236398 +0000 UTC m=+145.734319365 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:27 crc kubenswrapper[4684]: I0121 10:08:27.580972 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:27 crc kubenswrapper[4684]: E0121 10:08:27.581452 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:28.08143832 +0000 UTC m=+145.839521287 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:27 crc kubenswrapper[4684]: I0121 10:08:27.651493 4684 patch_prober.go:28] interesting pod/console-operator-58897d9998-jgwzk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 10:08:27 crc kubenswrapper[4684]: I0121 10:08:27.657309 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-jgwzk" podUID="634c4722-8d7d-463a-9844-8213116a4ce7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 10:08:27 crc kubenswrapper[4684]: I0121 10:08:27.682048 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:27 crc kubenswrapper[4684]: E0121 10:08:27.682709 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:28.182683591 +0000 UTC m=+145.940766548 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:27 crc kubenswrapper[4684]: I0121 10:08:27.783996 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:27 crc kubenswrapper[4684]: E0121 10:08:27.784558 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:28.284539064 +0000 UTC m=+146.042622031 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:27 crc kubenswrapper[4684]: I0121 10:08:27.887996 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:27 crc kubenswrapper[4684]: E0121 10:08:27.888591 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:28.388567965 +0000 UTC m=+146.146650932 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:27 crc kubenswrapper[4684]: I0121 10:08:27.943258 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72t62" event={"ID":"6c455bb4-0425-4c3e-8ad4-e49ffe9dd69a","Type":"ContainerStarted","Data":"85cda2f7498991ee6f2d6547503134d1fffca13a50883dfdc233059bf8f4fc41"} Jan 21 10:08:27 crc kubenswrapper[4684]: I0121 10:08:27.973809 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jfblx" event={"ID":"804798e6-979c-46ae-bc4b-e6bd55ce6e9b","Type":"ContainerStarted","Data":"e27c6ddd988091378b0929f6ba131a87da510c2415bfc267925ffedb9e94bf2f"} Jan 21 10:08:27 crc kubenswrapper[4684]: I0121 10:08:27.989756 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:27 crc kubenswrapper[4684]: E0121 10:08:27.992230 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:28.492213392 +0000 UTC m=+146.250296359 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.008079 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-72t62" podStartSLOduration=126.008053926 podStartE2EDuration="2m6.008053926s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:28.005661561 +0000 UTC m=+145.763744528" watchObservedRunningTime="2026-01-21 10:08:28.008053926 +0000 UTC m=+145.766136893" Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.016278 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-64j7q" event={"ID":"ecc3b392-1d71-4544-93ac-c93169931c41","Type":"ContainerStarted","Data":"7910453cefd6c46850bba118c8d3af5411d9e7a90ea289b215e0f138918e1674"} Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.038391 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dq2dv" event={"ID":"cd90b7f0-d328-4cfb-ad0d-83cf58bee09d","Type":"ContainerStarted","Data":"be7d32cca9411d692135a34ce79c5ad55a3b483be977e1e4f5f105a4dc37e42d"} Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.068096 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-64j7q" podStartSLOduration=126.068074638 podStartE2EDuration="2m6.068074638s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:28.062667845 +0000 UTC m=+145.820750812" watchObservedRunningTime="2026-01-21 10:08:28.068074638 +0000 UTC m=+145.826157605" Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.076951 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vn2nz" event={"ID":"366b5359-ea1c-4565-ae7b-53a71eadfa3e","Type":"ContainerStarted","Data":"4ead35b617fcf222026927f793dc26cf3422d5ff97fa836d4048f24ac8e6f5af"} Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.092046 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:28 crc kubenswrapper[4684]: E0121 10:08:28.093450 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 10:08:28.593429432 +0000 UTC m=+146.351512399 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.098944 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qjbj4" event={"ID":"d0eb2234-baac-4251-8bf4-631c7407c09f","Type":"ContainerStarted","Data":"ec6896585c8c0f28a57312463a87692afee8527d751ae46db43f6efbb1015cfd"} Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.131976 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9x6qf" event={"ID":"510de763-cfdd-4464-a9d6-d061ad7b88bf","Type":"ContainerStarted","Data":"a4d7193437b05b47036ebcecc3a645b9f6e9e988ddf407d2008032f315798d09"} Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.134384 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-dq2dv" podStartSLOduration=126.134350511 podStartE2EDuration="2m6.134350511s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:28.13289742 +0000 UTC m=+145.890980387" watchObservedRunningTime="2026-01-21 10:08:28.134350511 +0000 UTC m=+145.892433478" Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.142236 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7khjp" event={"ID":"2a16e132-7543-4642-bd55-b3abf288e009","Type":"ContainerStarted","Data":"5ab3a0bd2d2081eb869067f65f0f0fb1ba99f2eaf5b18da3f1796299f8907b7f"} Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.160674 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lvmjt" event={"ID":"6b3f6c72-88f5-4e25-a2dc-fbc18e402afc","Type":"ContainerStarted","Data":"8c1c19b28bd892ccb47743df7672e6352010d46d66688b0d6bd876d05ea38b38"} Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.189850 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-98cfq" event={"ID":"728bf40a-9dfc-4850-a3cf-2b506c0fe68c","Type":"ContainerStarted","Data":"a1e48d2a1814444ed0de534cb24b3535714f765b03aa495b757fd382dd3700b1"} Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.198676 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:28 crc kubenswrapper[4684]: E0121 10:08:28.201447 4684 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:28.701426424 +0000 UTC m=+146.459509601 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.245696 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hvhlb" event={"ID":"4f0d0b06-b45f-4c0b-9ce3-bcc3b0da6c75","Type":"ContainerStarted","Data":"e64d88e37d9726de7ae1bb35fabf731c115c6c7f70cb285bdfc57a8c44942cd6"} Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.264638 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dd9dl" event={"ID":"839bb9d6-4528-484e-9e12-ea749b5e177c","Type":"ContainerStarted","Data":"bbe9e357dc44635c6110c8bc831738ac7b5a1f5e755c73679606d05358ec4bfd"} Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.290792 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hvhlb" Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.301204 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:28 crc kubenswrapper[4684]: E0121 10:08:28.304094 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:28.804066994 +0000 UTC m=+146.562149961 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.305928 4684 patch_prober.go:28] interesting pod/router-default-5444994796-6g6vs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:08:28 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld Jan 21 10:08:28 crc kubenswrapper[4684]: [+]process-running ok Jan 21 10:08:28 crc kubenswrapper[4684]: healthz check failed Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.306008 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6g6vs" podUID="50a26307-43c9-4325-b992-91cac18db66c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.316689 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9x6qf" podStartSLOduration=126.316663174 podStartE2EDuration="2m6.316663174s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:28.234062847 +0000 UTC m=+145.992145814" watchObservedRunningTime="2026-01-21 10:08:28.316663174 +0000 UTC m=+146.074746141" Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.317119 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-dg7k7" event={"ID":"0d9a24c5-ed0d-473e-a562-ee476c663ed5","Type":"ContainerStarted","Data":"2b27983258e981aa92bea6c0af7a271e08689c42d6b40b2b47a5c2ddbf8ec43a"} Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.357880 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mnmmn" event={"ID":"7957fca4-e890-434e-bf3f-d77a83b1c99d","Type":"ContainerStarted","Data":"17a24685fb5256b8a96e72602ebc72c51c551cff3bf67f38c5ee726f03963fd8"} Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.358725 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-mnmmn" Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.388869 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vn2nz" podStartSLOduration=126.388843458 podStartE2EDuration="2m6.388843458s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:28.324597127 +0000 UTC m=+146.082680094" watchObservedRunningTime="2026-01-21 10:08:28.388843458 +0000 UTC m=+146.146926415" Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.389258 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-c5hr7" 
event={"ID":"6f6dc594-89a5-4c0f-9132-296c2e028c7b","Type":"ContainerStarted","Data":"1a3565387cba9fdcb6ff4855384f032b5d6423d3888cb564dc72e4eddb7d70c4"} Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.389987 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7khjp" podStartSLOduration=126.389980119 podStartE2EDuration="2m6.389980119s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:28.389310915 +0000 UTC m=+146.147393882" watchObservedRunningTime="2026-01-21 10:08:28.389980119 +0000 UTC m=+146.148063086" Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.403319 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:28 crc kubenswrapper[4684]: E0121 10:08:28.405130 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:28.905115169 +0000 UTC m=+146.663198136 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.416845 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6wfjd" event={"ID":"603ee3d6-73a1-4796-a857-84f9c889b3af","Type":"ContainerStarted","Data":"5f3c9aa33cfe8d9e6fda3e7e375849e25b4abe430d473d50a71ddfc53a39249f"} Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.418773 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6wfjd" Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.435837 4684 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6wfjd container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.436171 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6wfjd" podUID="603ee3d6-73a1-4796-a857-84f9c889b3af" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.470703 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-x97lq" 
event={"ID":"e6097625-6479-4b86-b888-78d72d1c7089","Type":"ContainerStarted","Data":"b902644c36950cabaa08c73e87992075cd14b4023969c6a11f4ea0c57cfd29e6"} Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.487582 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lvmjt" podStartSLOduration=126.487547919 podStartE2EDuration="2m6.487547919s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:28.48591748 +0000 UTC m=+146.244000447" watchObservedRunningTime="2026-01-21 10:08:28.487547919 +0000 UTC m=+146.245630886" Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.504968 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:28 crc kubenswrapper[4684]: E0121 10:08:28.507456 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:29.007431528 +0000 UTC m=+146.765514505 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.515848 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:28 crc kubenswrapper[4684]: E0121 10:08:28.516231 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:29.016213032 +0000 UTC m=+146.774295999 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.621140 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:28 crc kubenswrapper[4684]: E0121 10:08:28.622377 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:29.122337737 +0000 UTC m=+146.880420704 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.635098 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-98cfq" podStartSLOduration=126.63505961 podStartE2EDuration="2m6.63505961s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:28.601653069 +0000 UTC m=+146.359736036" watchObservedRunningTime="2026-01-21 10:08:28.63505961 +0000 UTC m=+146.393142577" Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.651908 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-jgwzk" Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.725069 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:28 crc kubenswrapper[4684]: E0121 10:08:28.725689 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:29.225671673 +0000 UTC m=+146.983754640 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.753751 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-mnmmn" podStartSLOduration=10.753723683 podStartE2EDuration="10.753723683s" podCreationTimestamp="2026-01-21 10:08:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:28.688208946 +0000 UTC m=+146.446291913" watchObservedRunningTime="2026-01-21 10:08:28.753723683 +0000 UTC m=+146.511806650" Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.755900 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-6wfjd" podStartSLOduration=126.75588086 podStartE2EDuration="2m6.75588086s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:28.753000597 +0000 UTC m=+146.511083574" watchObservedRunningTime="2026-01-21 10:08:28.75588086 +0000 UTC m=+146.513963827" Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.784503 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-x97lq" podStartSLOduration=126.78448225 podStartE2EDuration="2m6.78448225s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:28.778849589 +0000 UTC m=+146.536932576" watchObservedRunningTime="2026-01-21 10:08:28.78448225 +0000 UTC m=+146.542565217" Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.826197 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:28 crc kubenswrapper[4684]: E0121 10:08:28.826907 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:29.326887123 +0000 UTC m=+147.084970090 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:28 crc kubenswrapper[4684]: I0121 10:08:28.928811 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:28 crc kubenswrapper[4684]: E0121 10:08:28.929280 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:29.429262764 +0000 UTC m=+147.187345731 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.032467 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:29 crc kubenswrapper[4684]: E0121 10:08:29.032864 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:29.532844638 +0000 UTC m=+147.290927605 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.033638 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2q67n"]
Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.035011 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2q67n" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.049797 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.111624 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2q67n"] Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.140205 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e08a2ff1-2079-4571-9c86-3167ce7e20d6-utilities\") pod \"community-operators-2q67n\" (UID: \"e08a2ff1-2079-4571-9c86-3167ce7e20d6\") " pod="openshift-marketplace/community-operators-2q67n" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.140267 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.140314 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e08a2ff1-2079-4571-9c86-3167ce7e20d6-catalog-content\") pod \"community-operators-2q67n\" (UID: \"e08a2ff1-2079-4571-9c86-3167ce7e20d6\") " pod="openshift-marketplace/community-operators-2q67n" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.140336 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lbtv\" (UniqueName: \"kubernetes.io/projected/e08a2ff1-2079-4571-9c86-3167ce7e20d6-kube-api-access-4lbtv\") pod \"community-operators-2q67n\" (UID: \"e08a2ff1-2079-4571-9c86-3167ce7e20d6\") " pod="openshift-marketplace/community-operators-2q67n" Jan 21 10:08:29 crc kubenswrapper[4684]: E0121 10:08:29.146123 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:29.646101388 +0000 UTC m=+147.404184345 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.226004 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dd9dl" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.242955 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.243198 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e08a2ff1-2079-4571-9c86-3167ce7e20d6-catalog-content\") pod \"community-operators-2q67n\" (UID: \"e08a2ff1-2079-4571-9c86-3167ce7e20d6\") " pod="openshift-marketplace/community-operators-2q67n" Jan 21 10:08:29 crc kubenswrapper[4684]: E0121 10:08:29.243245 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:29.743203761 +0000 UTC m=+147.501286778 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.243333 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lbtv\" (UniqueName: \"kubernetes.io/projected/e08a2ff1-2079-4571-9c86-3167ce7e20d6-kube-api-access-4lbtv\") pod \"community-operators-2q67n\" (UID: \"e08a2ff1-2079-4571-9c86-3167ce7e20d6\") " pod="openshift-marketplace/community-operators-2q67n" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.243645 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e08a2ff1-2079-4571-9c86-3167ce7e20d6-utilities\") pod \"community-operators-2q67n\" (UID: \"e08a2ff1-2079-4571-9c86-3167ce7e20d6\") " pod="openshift-marketplace/community-operators-2q67n" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.243750 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e08a2ff1-2079-4571-9c86-3167ce7e20d6-catalog-content\") pod \"community-operators-2q67n\" (UID: \"e08a2ff1-2079-4571-9c86-3167ce7e20d6\") " pod="openshift-marketplace/community-operators-2q67n" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.243754 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:29 crc kubenswrapper[4684]: E0121 10:08:29.244140 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:29.744131315 +0000 UTC m=+147.502214282 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.244133 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e08a2ff1-2079-4571-9c86-3167ce7e20d6-utilities\") pod \"community-operators-2q67n\" (UID: \"e08a2ff1-2079-4571-9c86-3167ce7e20d6\") " pod="openshift-marketplace/community-operators-2q67n" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.315815 4684 patch_prober.go:28] interesting pod/router-default-5444994796-6g6vs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:08:29 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld Jan 21 10:08:29 crc kubenswrapper[4684]: [+]process-running ok Jan 21 10:08:29 crc kubenswrapper[4684]: healthz check failed Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.316481 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6g6vs" podUID="50a26307-43c9-4325-b992-91cac18db66c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.325309 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cnf2q"] Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.326353 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cnf2q" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.348514 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:29 crc kubenswrapper[4684]: E0121 10:08:29.348683 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:29.848654162 +0000 UTC m=+147.606737129 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.348785 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.348829 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b89446d-5402-4130-b07f-cdd46b6e3d5d-utilities\") pod \"certified-operators-cnf2q\" (UID: \"2b89446d-5402-4130-b07f-cdd46b6e3d5d\") " pod="openshift-marketplace/certified-operators-cnf2q" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.348861 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzr4j\" (UniqueName: \"kubernetes.io/projected/2b89446d-5402-4130-b07f-cdd46b6e3d5d-kube-api-access-zzr4j\") pod \"certified-operators-cnf2q\" (UID: \"2b89446d-5402-4130-b07f-cdd46b6e3d5d\") " pod="openshift-marketplace/certified-operators-cnf2q" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.348917 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b89446d-5402-4130-b07f-cdd46b6e3d5d-catalog-content\") pod \"certified-operators-cnf2q\" (UID: \"2b89446d-5402-4130-b07f-cdd46b6e3d5d\") " pod="openshift-marketplace/certified-operators-cnf2q" Jan 21 10:08:29 crc kubenswrapper[4684]: E0121 10:08:29.349281 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:29.849265244 +0000 UTC m=+147.607348201 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.350381 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.403986 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cnf2q"] Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.432565 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lbtv\" (UniqueName: \"kubernetes.io/projected/e08a2ff1-2079-4571-9c86-3167ce7e20d6-kube-api-access-4lbtv\") pod \"community-operators-2q67n\" (UID: \"e08a2ff1-2079-4571-9c86-3167ce7e20d6\") " pod="openshift-marketplace/community-operators-2q67n" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.450641 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:29 crc kubenswrapper[4684]: E0121 10:08:29.450874 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:29.950834407 +0000 UTC m=+147.708917384 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.450947 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.451003 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b89446d-5402-4130-b07f-cdd46b6e3d5d-utilities\") pod \"certified-operators-cnf2q\" (UID: \"2b89446d-5402-4130-b07f-cdd46b6e3d5d\") " pod="openshift-marketplace/certified-operators-cnf2q" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.451034 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzr4j\" (UniqueName: \"kubernetes.io/projected/2b89446d-5402-4130-b07f-cdd46b6e3d5d-kube-api-access-zzr4j\") pod \"certified-operators-cnf2q\" (UID: \"2b89446d-5402-4130-b07f-cdd46b6e3d5d\") " pod="openshift-marketplace/certified-operators-cnf2q" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.451079 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b89446d-5402-4130-b07f-cdd46b6e3d5d-catalog-content\") pod \"certified-operators-cnf2q\" (UID: \"2b89446d-5402-4130-b07f-cdd46b6e3d5d\") " pod="openshift-marketplace/certified-operators-cnf2q" Jan 21 10:08:29 crc kubenswrapper[4684]: E0121 10:08:29.451330 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:29.951322945 +0000 UTC m=+147.709405912 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.451675 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b89446d-5402-4130-b07f-cdd46b6e3d5d-utilities\") pod \"certified-operators-cnf2q\" (UID: \"2b89446d-5402-4130-b07f-cdd46b6e3d5d\") " pod="openshift-marketplace/certified-operators-cnf2q" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.451705 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b89446d-5402-4130-b07f-cdd46b6e3d5d-catalog-content\") pod \"certified-operators-cnf2q\" (UID: \"2b89446d-5402-4130-b07f-cdd46b6e3d5d\") " pod="openshift-marketplace/certified-operators-cnf2q" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.485741 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mnmmn" event={"ID":"7957fca4-e890-434e-bf3f-d77a83b1c99d","Type":"ContainerStarted","Data":"a626b87bc100db7e0a24559e54f59db171fde2e8ff51d4138115d61915db1d75"} Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.493966 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jfblx" event={"ID":"804798e6-979c-46ae-bc4b-e6bd55ce6e9b","Type":"ContainerStarted","Data":"a621a0fe33830195fc96d109bb82e6a73a882879801e8cc356a2e849b945f199"} Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.496317 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-s5mkm" event={"ID":"843f3aca-bc65-445d-bed9-afbaa8a052d7","Type":"ContainerStarted","Data":"d3370554aa9e7324e7fadfa93824a18a26d083f95ddcb3960494c20deecff2a1"} Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.497869 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9x6qf" event={"ID":"510de763-cfdd-4464-a9d6-d061ad7b88bf","Type":"ContainerStarted","Data":"b3da428c2ed7831b8bdd3c79459ec1bd3ab1ab4fe6ee4167150b894fc7a15215"} Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.501339 4684 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6wfjd container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.501392 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6wfjd" podUID="603ee3d6-73a1-4796-a857-84f9c889b3af" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.502101 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-dg7k7" 
event={"ID":"0d9a24c5-ed0d-473e-a562-ee476c663ed5","Type":"ContainerStarted","Data":"cb248ec92b3d8d00cfe98969f89e1076cd96f0de53ead44b29cc54a7f40fb582"} Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.515965 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sclvq"] Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.517068 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sclvq" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.552099 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.552214 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzr4j\" (UniqueName: \"kubernetes.io/projected/2b89446d-5402-4130-b07f-cdd46b6e3d5d-kube-api-access-zzr4j\") pod \"certified-operators-cnf2q\" (UID: \"2b89446d-5402-4130-b07f-cdd46b6e3d5d\") " pod="openshift-marketplace/certified-operators-cnf2q" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.552551 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e209352-f09e-4ce0-a8a7-87bd39e1e962-catalog-content\") pod \"community-operators-sclvq\" (UID: \"7e209352-f09e-4ce0-a8a7-87bd39e1e962\") " pod="openshift-marketplace/community-operators-sclvq" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.553139 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-745vk\" (UniqueName: \"kubernetes.io/projected/7e209352-f09e-4ce0-a8a7-87bd39e1e962-kube-api-access-745vk\") pod \"community-operators-sclvq\" (UID: \"7e209352-f09e-4ce0-a8a7-87bd39e1e962\") " pod="openshift-marketplace/community-operators-sclvq" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.553268 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e209352-f09e-4ce0-a8a7-87bd39e1e962-utilities\") pod \"community-operators-sclvq\" (UID: \"7e209352-f09e-4ce0-a8a7-87bd39e1e962\") " pod="openshift-marketplace/community-operators-sclvq" Jan 21 10:08:29 crc kubenswrapper[4684]: E0121 10:08:29.553725 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:30.053695916 +0000 UTC m=+147.811778883 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.595343 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sclvq"] Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.636292 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vq4pn"] Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.660508 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vq4pn" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.660729 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cnf2q" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.662252 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.662345 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-745vk\" (UniqueName: \"kubernetes.io/projected/7e209352-f09e-4ce0-a8a7-87bd39e1e962-kube-api-access-745vk\") pod \"community-operators-sclvq\" (UID: \"7e209352-f09e-4ce0-a8a7-87bd39e1e962\") " pod="openshift-marketplace/community-operators-sclvq" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.662402 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e209352-f09e-4ce0-a8a7-87bd39e1e962-utilities\") pod \"community-operators-sclvq\" (UID: \"7e209352-f09e-4ce0-a8a7-87bd39e1e962\") " pod="openshift-marketplace/community-operators-sclvq" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.662486 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e209352-f09e-4ce0-a8a7-87bd39e1e962-catalog-content\") pod \"community-operators-sclvq\" (UID: \"7e209352-f09e-4ce0-a8a7-87bd39e1e962\") " pod="openshift-marketplace/community-operators-sclvq" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.663208 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e209352-f09e-4ce0-a8a7-87bd39e1e962-catalog-content\") pod \"community-operators-sclvq\" (UID: \"7e209352-f09e-4ce0-a8a7-87bd39e1e962\") " pod="openshift-marketplace/community-operators-sclvq" Jan 21 10:08:29 crc kubenswrapper[4684]: E0121 10:08:29.664416 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 10:08:30.164399024 +0000 UTC m=+147.922481991 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.664425 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e209352-f09e-4ce0-a8a7-87bd39e1e962-utilities\") pod \"community-operators-sclvq\" (UID: \"7e209352-f09e-4ce0-a8a7-87bd39e1e962\") " pod="openshift-marketplace/community-operators-sclvq" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.665978 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2q67n" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.688381 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-jfblx" podStartSLOduration=127.688342168 podStartE2EDuration="2m7.688342168s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:29.643886913 +0000 UTC m=+147.401969880" watchObservedRunningTime="2026-01-21 10:08:29.688342168 +0000 UTC m=+147.446425135" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.723821 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vq4pn"] Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.733930 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-745vk\" (UniqueName: \"kubernetes.io/projected/7e209352-f09e-4ce0-a8a7-87bd39e1e962-kube-api-access-745vk\") pod \"community-operators-sclvq\" (UID: \"7e209352-f09e-4ce0-a8a7-87bd39e1e962\") " pod="openshift-marketplace/community-operators-sclvq" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.767303 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.767752 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnxf7\" (UniqueName: \"kubernetes.io/projected/1d8d0a34-73e7-4fae-a317-4140bf78b26d-kube-api-access-wnxf7\") pod \"certified-operators-vq4pn\" (UID: \"1d8d0a34-73e7-4fae-a317-4140bf78b26d\") " pod="openshift-marketplace/certified-operators-vq4pn" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.767788 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d8d0a34-73e7-4fae-a317-4140bf78b26d-utilities\") pod \"certified-operators-vq4pn\" (UID: \"1d8d0a34-73e7-4fae-a317-4140bf78b26d\") " pod="openshift-marketplace/certified-operators-vq4pn"
Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.767810 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d8d0a34-73e7-4fae-a317-4140bf78b26d-catalog-content\") pod \"certified-operators-vq4pn\" (UID: \"1d8d0a34-73e7-4fae-a317-4140bf78b26d\") " pod="openshift-marketplace/certified-operators-vq4pn" Jan 21 10:08:29 crc kubenswrapper[4684]: E0121 10:08:29.767880 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:30.267861205 +0000 UTC m=+148.025944172 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.808872 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-dg7k7" podStartSLOduration=127.808839096 podStartE2EDuration="2m7.808839096s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:29.734163423 +0000 UTC m=+147.492246390" watchObservedRunningTime="2026-01-21 10:08:29.808839096 +0000 UTC m=+147.566922063" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.837761 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sclvq"
Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.869134 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnxf7\" (UniqueName: \"kubernetes.io/projected/1d8d0a34-73e7-4fae-a317-4140bf78b26d-kube-api-access-wnxf7\") pod \"certified-operators-vq4pn\" (UID: \"1d8d0a34-73e7-4fae-a317-4140bf78b26d\") " pod="openshift-marketplace/certified-operators-vq4pn" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.869192 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.869223 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d8d0a34-73e7-4fae-a317-4140bf78b26d-utilities\") pod \"certified-operators-vq4pn\" (UID: \"1d8d0a34-73e7-4fae-a317-4140bf78b26d\") " pod="openshift-marketplace/certified-operators-vq4pn" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.869250 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d8d0a34-73e7-4fae-a317-4140bf78b26d-catalog-content\") pod \"certified-operators-vq4pn\" (UID: \"1d8d0a34-73e7-4fae-a317-4140bf78b26d\") " pod="openshift-marketplace/certified-operators-vq4pn" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.869281 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.869330 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.869395 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.869454 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.871550 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d8d0a34-73e7-4fae-a317-4140bf78b26d-catalog-content\") pod \"certified-operators-vq4pn\" (UID: \"1d8d0a34-73e7-4fae-a317-4140bf78b26d\") " pod="openshift-marketplace/certified-operators-vq4pn"
Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.872543 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d8d0a34-73e7-4fae-a317-4140bf78b26d-utilities\") pod \"certified-operators-vq4pn\" (UID: \"1d8d0a34-73e7-4fae-a317-4140bf78b26d\") " pod="openshift-marketplace/certified-operators-vq4pn" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.873336 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:08:29 crc kubenswrapper[4684]: E0121 10:08:29.873393 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:30.373372008 +0000 UTC m=+148.131454975 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.895469 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.896910 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.955784 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnxf7\" (UniqueName: \"kubernetes.io/projected/1d8d0a34-73e7-4fae-a317-4140bf78b26d-kube-api-access-wnxf7\") pod \"certified-operators-vq4pn\" (UID: \"1d8d0a34-73e7-4fae-a317-4140bf78b26d\") " pod="openshift-marketplace/certified-operators-vq4pn" Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.956806 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 10:08:29 crc kubenswrapper[4684]: I0121 10:08:29.978030 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:29 crc kubenswrapper[4684]: E0121 10:08:29.978513 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:30.478493247 +0000 UTC m=+148.236576214 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.047795 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.066544 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.072729 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.077117 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vq4pn" Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.082334 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:30 crc kubenswrapper[4684]: E0121 10:08:30.082769 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:30.582754096 +0000 UTC m=+148.340837053 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.184112 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:30 crc kubenswrapper[4684]: E0121 10:08:30.184522 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:30.684504166 +0000 UTC m=+148.442587133 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.285684 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:30 crc kubenswrapper[4684]: E0121 10:08:30.286109 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:30.786089758 +0000 UTC m=+148.544172725 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.311078 4684 patch_prober.go:28] interesting pod/router-default-5444994796-6g6vs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:08:30 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld Jan 21 10:08:30 crc kubenswrapper[4684]: [+]process-running ok Jan 21 10:08:30 crc kubenswrapper[4684]: healthz check failed Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.311169 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6g6vs" podUID="50a26307-43c9-4325-b992-91cac18db66c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.388965 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:30 crc kubenswrapper[4684]: E0121 10:08:30.389349 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:30.889329911 +0000 UTC m=+148.647412878 (durationBeforeRetry 500ms). 
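The router-default startup probe above fails with HTTP 500 and an aggregated healthz body ([-]backend-http and [-]has-synced failing, [+]process-running ok); the same pattern repeats below for the downloads, console, and openshift-apiserver pods while they come up. A sketch of the probe shape applied here, treating statuses outside 200-399 as failure and keeping the start of the body for the log line; the URL is a hypothetical stand-in for the router's health endpoint, not the actual probe configuration:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeOnce mimics an HTTP probe: statuses in 200-399 count as success,
// anything else (the router returns 500 until its backends sync) fails,
// and the beginning of the response body is retained as "start-of-body".
func probeOnce(url string) (ok bool, startOfBody string) {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return false, err.Error() // e.g. "connect: connection refused"
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(io.LimitReader(resp.Body, 256))
	return resp.StatusCode >= 200 && resp.StatusCode < 400, string(body)
}

func main() {
	const url = "http://127.0.0.1:1936/healthz/ready" // hypothetical health port
	for i := 0; i < 3; i++ {
		ok, body := probeOnce(url)
		fmt.Printf("probe ok=%v start-of-body=%q\n", ok, body)
		time.Sleep(time.Second) // periodSeconds-style spacing between attempts
	}
}

The "connection refused" variants in the entries below mean nothing is listening yet; the 500 variants mean the server is up but not all of its healthz sub-checks have gone [+] yet.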
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.406526 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2q67n"] Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.416162 4684 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 21 10:08:30 crc kubenswrapper[4684]: W0121 10:08:30.451553 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode08a2ff1_2079_4571_9c86_3167ce7e20d6.slice/crio-1c85c1b60aed8813dd26999753ccc7eed2a687aae7c308f1fb3c7df6492d969d WatchSource:0}: Error finding container 1c85c1b60aed8813dd26999753ccc7eed2a687aae7c308f1fb3c7df6492d969d: Status 404 returned error can't find the container with id 1c85c1b60aed8813dd26999753ccc7eed2a687aae7c308f1fb3c7df6492d969d Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.494043 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:30 crc kubenswrapper[4684]: E0121 10:08:30.494516 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:30.994500992 +0000 UTC m=+148.752583959 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.595010 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:30 crc kubenswrapper[4684]: E0121 10:08:30.595573 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:31.095553696 +0000 UTC m=+148.853636653 (durationBeforeRetry 500ms). 
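The plugin_watcher entry above is the turning point: the kubelet has spotted the driver's registration socket under /var/lib/kubelet/plugins_registry and added it to the desired state cache. Registration state is also visible from the API side, because the kubelet publishes registered drivers on the node's CSINode object. A client-go sketch of that check, assuming a reachable kubeconfig in the default location and the node name crc from these logs:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from ~/.kube/config; in-cluster config would also work.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// The CSINode object lists every driver registered on the node; while the
	// retries above are failing, kubevirt.io.hostpath-provisioner is absent.
	csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, d := range csiNode.Spec.Drivers {
		fmt.Printf("registered driver %s (nodeID %s)\n", d.Name, d.NodeID)
	}
}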
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.632051 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-5rnkx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.632126 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5rnkx" podUID="0a7f0592-7c89-496c-acb0-3ae031dbffb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.632501 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-5rnkx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.632523 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5rnkx" podUID="0a7f0592-7c89-496c-acb0-3ae031dbffb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.688449 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-s5mkm" event={"ID":"843f3aca-bc65-445d-bed9-afbaa8a052d7","Type":"ContainerStarted","Data":"6629c43cfbfd43d5e08ba84ab47a2e97e6e051c8a78ee5a394d599b953cbf0f2"} Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.692337 4684 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6wfjd container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.692419 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6wfjd" podUID="603ee3d6-73a1-4796-a857-84f9c889b3af" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.696976 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:30 crc kubenswrapper[4684]: E0121 10:08:30.697286 4684 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:08:31.197274545 +0000 UTC m=+148.955357512 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8k2vm" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.739687 4684 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-21T10:08:30.416207519Z","Handler":null,"Name":""} Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.773205 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.774230 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.798317 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:30 crc kubenswrapper[4684]: E0121 10:08:30.800898 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:08:31.30087005 +0000 UTC m=+149.058953017 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.812935 4684 patch_prober.go:28] interesting pod/apiserver-76f77b778f-b5z5t container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 21 10:08:30 crc kubenswrapper[4684]: [+]log ok Jan 21 10:08:30 crc kubenswrapper[4684]: [+]etcd ok Jan 21 10:08:30 crc kubenswrapper[4684]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 21 10:08:30 crc kubenswrapper[4684]: [+]poststarthook/generic-apiserver-start-informers ok Jan 21 10:08:30 crc kubenswrapper[4684]: [+]poststarthook/max-in-flight-filter ok Jan 21 10:08:30 crc kubenswrapper[4684]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 21 10:08:30 crc kubenswrapper[4684]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 21 10:08:30 crc kubenswrapper[4684]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 21 10:08:30 crc kubenswrapper[4684]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 21 10:08:30 crc kubenswrapper[4684]: [+]poststarthook/project.openshift.io-projectcache ok Jan 21 10:08:30 crc kubenswrapper[4684]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 21 10:08:30 crc kubenswrapper[4684]: [+]poststarthook/openshift.io-startinformers ok Jan 21 10:08:30 crc kubenswrapper[4684]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 21 10:08:30 crc kubenswrapper[4684]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 21 10:08:30 crc kubenswrapper[4684]: livez check failed Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.813013 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" podUID="9379cfa5-977e-46d6-b841-f1b8c99a6c75" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.813244 4684 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.813268 4684 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.833242 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cnf2q"] Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.843726 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-x6zks" Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.843806 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-x6zks" Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.864447 4684 patch_prober.go:28] interesting 
pod/console-f9d7485db-x6zks container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.37:8443/health\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.864523 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-x6zks" podUID="d1cc5379-f931-4e88-b422-4ecc9e33e2a0" containerName="console" probeResult="failure" output="Get \"https://10.217.0.37:8443/health\": dial tcp 10.217.0.37:8443: connect: connection refused" Jan 21 10:08:30 crc kubenswrapper[4684]: W0121 10:08:30.897975 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b89446d_5402_4130_b07f_cdd46b6e3d5d.slice/crio-d05d9292c3448fde8425c255ac1621347c9256fddcd1e5395c178b0b82c0f291 WatchSource:0}: Error finding container d05d9292c3448fde8425c255ac1621347c9256fddcd1e5395c178b0b82c0f291: Status 404 returned error can't find the container with id d05d9292c3448fde8425c255ac1621347c9256fddcd1e5395c178b0b82c0f291 Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.900802 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.930784 4684 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 10:08:30 crc kubenswrapper[4684]: I0121 10:08:30.930830 4684 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.072137 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sclvq"] Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.216116 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8k2vm\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.222632 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gs98q"] Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.224238 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gs98q" Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.227198 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.233052 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gs98q"] Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.276230 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.309295 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-6g6vs" Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.316122 4684 patch_prober.go:28] interesting pod/router-default-5444994796-6g6vs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:08:31 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld Jan 21 10:08:31 crc kubenswrapper[4684]: [+]process-running ok Jan 21 10:08:31 crc kubenswrapper[4684]: healthz check failed Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.316190 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6g6vs" podUID="50a26307-43c9-4325-b992-91cac18db66c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.374781 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.382568 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgb2d\" (UniqueName: \"kubernetes.io/projected/0e22e72a-79a2-45ea-8093-17f56c2f1748-kube-api-access-dgb2d\") pod \"redhat-marketplace-gs98q\" (UID: \"0e22e72a-79a2-45ea-8093-17f56c2f1748\") " pod="openshift-marketplace/redhat-marketplace-gs98q" Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.382622 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e22e72a-79a2-45ea-8093-17f56c2f1748-catalog-content\") pod \"redhat-marketplace-gs98q\" (UID: \"0e22e72a-79a2-45ea-8093-17f56c2f1748\") " pod="openshift-marketplace/redhat-marketplace-gs98q" Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.382657 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e22e72a-79a2-45ea-8093-17f56c2f1748-utilities\") pod \"redhat-marketplace-gs98q\" (UID: \"0e22e72a-79a2-45ea-8093-17f56c2f1748\") " pod="openshift-marketplace/redhat-marketplace-gs98q" Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.422013 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
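With the driver registered, the stalled operations unwind in order: attacher.MountDevice is skipped because this driver does not advertise STAGE_UNSTAGE_VOLUME, MountVolume.SetUp for the image-registry PVC succeeds, and the TearDown above for the departed pod finally completes. The skip is a capability gate: a volume is only staged when the node service reports that capability. A sketch of querying it over the driver's CSI socket, using the CSI spec's Go bindings; illustrative, not kubelet code:

package main

import (
	"context"
	"fmt"
	"log"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Dial the node plugin's CSI socket (path from the registration log above).
	conn, err := grpc.Dial("unix:///var/lib/kubelet/plugins/csi-hostpath/csi.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	node := csi.NewNodeClient(conn)
	resp, err := node.NodeGetCapabilities(context.TODO(), &csi.NodeGetCapabilitiesRequest{})
	if err != nil {
		log.Fatal(err)
	}
	stage := false
	for _, c := range resp.GetCapabilities() {
		if c.GetRpc().GetType() == csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME {
			stage = true
		}
	}
	// Per the log, this driver reports false, so MountDevice is a no-op.
	fmt.Println("STAGE_UNSTAGE_VOLUME:", stage)
}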
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.484528 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgb2d\" (UniqueName: \"kubernetes.io/projected/0e22e72a-79a2-45ea-8093-17f56c2f1748-kube-api-access-dgb2d\") pod \"redhat-marketplace-gs98q\" (UID: \"0e22e72a-79a2-45ea-8093-17f56c2f1748\") " pod="openshift-marketplace/redhat-marketplace-gs98q" Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.484587 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e22e72a-79a2-45ea-8093-17f56c2f1748-catalog-content\") pod \"redhat-marketplace-gs98q\" (UID: \"0e22e72a-79a2-45ea-8093-17f56c2f1748\") " pod="openshift-marketplace/redhat-marketplace-gs98q" Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.484613 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e22e72a-79a2-45ea-8093-17f56c2f1748-utilities\") pod \"redhat-marketplace-gs98q\" (UID: \"0e22e72a-79a2-45ea-8093-17f56c2f1748\") " pod="openshift-marketplace/redhat-marketplace-gs98q" Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.485226 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e22e72a-79a2-45ea-8093-17f56c2f1748-utilities\") pod \"redhat-marketplace-gs98q\" (UID: \"0e22e72a-79a2-45ea-8093-17f56c2f1748\") " pod="openshift-marketplace/redhat-marketplace-gs98q" Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.485791 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e22e72a-79a2-45ea-8093-17f56c2f1748-catalog-content\") pod \"redhat-marketplace-gs98q\" (UID: \"0e22e72a-79a2-45ea-8093-17f56c2f1748\") " pod="openshift-marketplace/redhat-marketplace-gs98q" Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.551987 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgb2d\" (UniqueName: \"kubernetes.io/projected/0e22e72a-79a2-45ea-8093-17f56c2f1748-kube-api-access-dgb2d\") pod \"redhat-marketplace-gs98q\" (UID: \"0e22e72a-79a2-45ea-8093-17f56c2f1748\") " pod="openshift-marketplace/redhat-marketplace-gs98q" Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.599546 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4vs72"] Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.601387 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4vs72" Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.602665 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vq4pn"] Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.614347 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vs72"] Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.690028 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gs98q" Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.690741 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e37eeae-7608-474f-8346-3a3add011e41-utilities\") pod \"redhat-marketplace-4vs72\" (UID: \"1e37eeae-7608-474f-8346-3a3add011e41\") " pod="openshift-marketplace/redhat-marketplace-4vs72" Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.690796 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e37eeae-7608-474f-8346-3a3add011e41-catalog-content\") pod \"redhat-marketplace-4vs72\" (UID: \"1e37eeae-7608-474f-8346-3a3add011e41\") " pod="openshift-marketplace/redhat-marketplace-4vs72" Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.690867 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf4xt\" (UniqueName: \"kubernetes.io/projected/1e37eeae-7608-474f-8346-3a3add011e41-kube-api-access-tf4xt\") pod \"redhat-marketplace-4vs72\" (UID: \"1e37eeae-7608-474f-8346-3a3add011e41\") " pod="openshift-marketplace/redhat-marketplace-4vs72" Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.757426 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ff30a9bd22157e6a1682e7c2d435b11833adb9be54f3a5c0c8b15c2136df46d6"} Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.765293 4684 generic.go:334] "Generic (PLEG): container finished" podID="e08a2ff1-2079-4571-9c86-3167ce7e20d6" containerID="77e5a96fbd56fd425a9b9963633c351f73d83f9dea9f1c68a518798b0dc4f8d6" exitCode=0 Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.765713 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q67n" event={"ID":"e08a2ff1-2079-4571-9c86-3167ce7e20d6","Type":"ContainerDied","Data":"77e5a96fbd56fd425a9b9963633c351f73d83f9dea9f1c68a518798b0dc4f8d6"} Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.765778 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q67n" event={"ID":"e08a2ff1-2079-4571-9c86-3167ce7e20d6","Type":"ContainerStarted","Data":"1c85c1b60aed8813dd26999753ccc7eed2a687aae7c308f1fb3c7df6492d969d"} Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.796319 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf4xt\" (UniqueName: \"kubernetes.io/projected/1e37eeae-7608-474f-8346-3a3add011e41-kube-api-access-tf4xt\") pod \"redhat-marketplace-4vs72\" (UID: \"1e37eeae-7608-474f-8346-3a3add011e41\") " pod="openshift-marketplace/redhat-marketplace-4vs72" Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.796436 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e37eeae-7608-474f-8346-3a3add011e41-utilities\") pod \"redhat-marketplace-4vs72\" (UID: \"1e37eeae-7608-474f-8346-3a3add011e41\") " pod="openshift-marketplace/redhat-marketplace-4vs72" Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.796458 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1e37eeae-7608-474f-8346-3a3add011e41-catalog-content\") pod \"redhat-marketplace-4vs72\" (UID: \"1e37eeae-7608-474f-8346-3a3add011e41\") " pod="openshift-marketplace/redhat-marketplace-4vs72" Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.796879 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e37eeae-7608-474f-8346-3a3add011e41-catalog-content\") pod \"redhat-marketplace-4vs72\" (UID: \"1e37eeae-7608-474f-8346-3a3add011e41\") " pod="openshift-marketplace/redhat-marketplace-4vs72" Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.797374 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e37eeae-7608-474f-8346-3a3add011e41-utilities\") pod \"redhat-marketplace-4vs72\" (UID: \"1e37eeae-7608-474f-8346-3a3add011e41\") " pod="openshift-marketplace/redhat-marketplace-4vs72" Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.816910 4684 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.831937 4684 generic.go:334] "Generic (PLEG): container finished" podID="7e209352-f09e-4ce0-a8a7-87bd39e1e962" containerID="1050bcbe81b22223c1581bda0a78f4f9c2f3e5f2a01d263546a9ab22c2145504" exitCode=0 Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.832013 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sclvq" event={"ID":"7e209352-f09e-4ce0-a8a7-87bd39e1e962","Type":"ContainerDied","Data":"1050bcbe81b22223c1581bda0a78f4f9c2f3e5f2a01d263546a9ab22c2145504"} Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.832054 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sclvq" event={"ID":"7e209352-f09e-4ce0-a8a7-87bd39e1e962","Type":"ContainerStarted","Data":"4c8a741955fee42c1f56c1774630038a25e412dbb91dcea21470efb10a1c4026"} Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.836257 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf4xt\" (UniqueName: \"kubernetes.io/projected/1e37eeae-7608-474f-8346-3a3add011e41-kube-api-access-tf4xt\") pod \"redhat-marketplace-4vs72\" (UID: \"1e37eeae-7608-474f-8346-3a3add011e41\") " pod="openshift-marketplace/redhat-marketplace-4vs72" Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.837618 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vq4pn" event={"ID":"1d8d0a34-73e7-4fae-a317-4140bf78b26d","Type":"ContainerStarted","Data":"66e6ffc3f069b5694c190fcbb6071c424186fa9c6d70809ccc612e430231a350"} Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.854830 4684 generic.go:334] "Generic (PLEG): container finished" podID="2b89446d-5402-4130-b07f-cdd46b6e3d5d" containerID="8a8dd92bcc258df18707938ce7b6886dd5fefc7f62bea3349184ae39d9330047" exitCode=0 Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.854935 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cnf2q" event={"ID":"2b89446d-5402-4130-b07f-cdd46b6e3d5d","Type":"ContainerDied","Data":"8a8dd92bcc258df18707938ce7b6886dd5fefc7f62bea3349184ae39d9330047"} Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.854969 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cnf2q" 
event={"ID":"2b89446d-5402-4130-b07f-cdd46b6e3d5d","Type":"ContainerStarted","Data":"d05d9292c3448fde8425c255ac1621347c9256fddcd1e5395c178b0b82c0f291"} Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.871657 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-6wfjd" Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.901326 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-s5mkm" event={"ID":"843f3aca-bc65-445d-bed9-afbaa8a052d7","Type":"ContainerStarted","Data":"c78723a4e9ce1c5dd42b237b86a8085ca6cd8a111b78b65261a45a5a85e761b0"} Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.920283 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0d1d8b16964fb2051862d58d8729acfd86ebee6750315bb7dc4a64786a9dc051"} Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.924099 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"0d0c69f6d2fb87f173a07091bef49b54f3548aac89c5a1e8114678c564d8a5fd"} Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.931315 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4vs72" Jan 21 10:08:31 crc kubenswrapper[4684]: I0121 10:08:31.933076 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8k2vm"] Jan 21 10:08:31 crc kubenswrapper[4684]: W0121 10:08:31.952069 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79b7756e_38ae_4b6c_acc0_b4c0f825b3ad.slice/crio-6871f9ad1f607ff61d102f10265b65e3c44e1dcff0df229e75249b66b27476ad WatchSource:0}: Error finding container 6871f9ad1f607ff61d102f10265b65e3c44e1dcff0df229e75249b66b27476ad: Status 404 returned error can't find the container with id 6871f9ad1f607ff61d102f10265b65e3c44e1dcff0df229e75249b66b27476ad Jan 21 10:08:32 crc kubenswrapper[4684]: I0121 10:08:32.183244 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qqrkg"] Jan 21 10:08:32 crc kubenswrapper[4684]: I0121 10:08:32.185194 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qqrkg" Jan 21 10:08:32 crc kubenswrapper[4684]: I0121 10:08:32.187843 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 10:08:32 crc kubenswrapper[4684]: I0121 10:08:32.194539 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qqrkg"] Jan 21 10:08:32 crc kubenswrapper[4684]: I0121 10:08:32.206868 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gs98q"] Jan 21 10:08:32 crc kubenswrapper[4684]: I0121 10:08:32.302399 4684 patch_prober.go:28] interesting pod/router-default-5444994796-6g6vs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:08:32 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld Jan 21 10:08:32 crc kubenswrapper[4684]: [+]process-running ok Jan 21 10:08:32 crc kubenswrapper[4684]: healthz check failed Jan 21 10:08:32 crc kubenswrapper[4684]: I0121 10:08:32.302525 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6g6vs" podUID="50a26307-43c9-4325-b992-91cac18db66c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:08:32 crc kubenswrapper[4684]: I0121 10:08:32.303965 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0b6cdf0-27ef-4701-b0d2-4b877b043253-utilities\") pod \"redhat-operators-qqrkg\" (UID: \"b0b6cdf0-27ef-4701-b0d2-4b877b043253\") " pod="openshift-marketplace/redhat-operators-qqrkg" Jan 21 10:08:32 crc kubenswrapper[4684]: I0121 10:08:32.304080 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5scz\" (UniqueName: \"kubernetes.io/projected/b0b6cdf0-27ef-4701-b0d2-4b877b043253-kube-api-access-t5scz\") pod \"redhat-operators-qqrkg\" (UID: \"b0b6cdf0-27ef-4701-b0d2-4b877b043253\") " pod="openshift-marketplace/redhat-operators-qqrkg" Jan 21 10:08:32 crc kubenswrapper[4684]: I0121 10:08:32.304111 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0b6cdf0-27ef-4701-b0d2-4b877b043253-catalog-content\") pod \"redhat-operators-qqrkg\" (UID: \"b0b6cdf0-27ef-4701-b0d2-4b877b043253\") " pod="openshift-marketplace/redhat-operators-qqrkg" Jan 21 10:08:32 crc kubenswrapper[4684]: I0121 10:08:32.310481 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vs72"] Jan 21 10:08:32 crc kubenswrapper[4684]: W0121 10:08:32.321500 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e37eeae_7608_474f_8346_3a3add011e41.slice/crio-68b810803c66fce5e22f8528e689e2f5acca9b98f391d655a9ad4a44da000c51 WatchSource:0}: Error finding container 68b810803c66fce5e22f8528e689e2f5acca9b98f391d655a9ad4a44da000c51: Status 404 returned error can't find the container with id 68b810803c66fce5e22f8528e689e2f5acca9b98f391d655a9ad4a44da000c51 Jan 21 10:08:32 crc kubenswrapper[4684]: I0121 10:08:32.405213 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5scz\" 
(UniqueName: \"kubernetes.io/projected/b0b6cdf0-27ef-4701-b0d2-4b877b043253-kube-api-access-t5scz\") pod \"redhat-operators-qqrkg\" (UID: \"b0b6cdf0-27ef-4701-b0d2-4b877b043253\") " pod="openshift-marketplace/redhat-operators-qqrkg" Jan 21 10:08:32 crc kubenswrapper[4684]: I0121 10:08:32.405772 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0b6cdf0-27ef-4701-b0d2-4b877b043253-catalog-content\") pod \"redhat-operators-qqrkg\" (UID: \"b0b6cdf0-27ef-4701-b0d2-4b877b043253\") " pod="openshift-marketplace/redhat-operators-qqrkg" Jan 21 10:08:32 crc kubenswrapper[4684]: I0121 10:08:32.405854 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0b6cdf0-27ef-4701-b0d2-4b877b043253-utilities\") pod \"redhat-operators-qqrkg\" (UID: \"b0b6cdf0-27ef-4701-b0d2-4b877b043253\") " pod="openshift-marketplace/redhat-operators-qqrkg" Jan 21 10:08:32 crc kubenswrapper[4684]: I0121 10:08:32.406489 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0b6cdf0-27ef-4701-b0d2-4b877b043253-utilities\") pod \"redhat-operators-qqrkg\" (UID: \"b0b6cdf0-27ef-4701-b0d2-4b877b043253\") " pod="openshift-marketplace/redhat-operators-qqrkg" Jan 21 10:08:32 crc kubenswrapper[4684]: I0121 10:08:32.407089 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0b6cdf0-27ef-4701-b0d2-4b877b043253-catalog-content\") pod \"redhat-operators-qqrkg\" (UID: \"b0b6cdf0-27ef-4701-b0d2-4b877b043253\") " pod="openshift-marketplace/redhat-operators-qqrkg" Jan 21 10:08:32 crc kubenswrapper[4684]: I0121 10:08:32.446349 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5scz\" (UniqueName: \"kubernetes.io/projected/b0b6cdf0-27ef-4701-b0d2-4b877b043253-kube-api-access-t5scz\") pod \"redhat-operators-qqrkg\" (UID: \"b0b6cdf0-27ef-4701-b0d2-4b877b043253\") " pod="openshift-marketplace/redhat-operators-qqrkg" Jan 21 10:08:32 crc kubenswrapper[4684]: I0121 10:08:32.541249 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qqrkg" Jan 21 10:08:32 crc kubenswrapper[4684]: I0121 10:08:32.577650 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 21 10:08:32 crc kubenswrapper[4684]: I0121 10:08:32.597072 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mm22r"] Jan 21 10:08:32 crc kubenswrapper[4684]: I0121 10:08:32.598398 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mm22r" Jan 21 10:08:32 crc kubenswrapper[4684]: I0121 10:08:32.624142 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mm22r"] Jan 21 10:08:32 crc kubenswrapper[4684]: I0121 10:08:32.713671 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/264a10c7-1457-42d2-95ae-71588cc2a4a3-utilities\") pod \"redhat-operators-mm22r\" (UID: \"264a10c7-1457-42d2-95ae-71588cc2a4a3\") " pod="openshift-marketplace/redhat-operators-mm22r" Jan 21 10:08:32 crc kubenswrapper[4684]: I0121 10:08:32.713732 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhczt\" (UniqueName: \"kubernetes.io/projected/264a10c7-1457-42d2-95ae-71588cc2a4a3-kube-api-access-zhczt\") pod \"redhat-operators-mm22r\" (UID: \"264a10c7-1457-42d2-95ae-71588cc2a4a3\") " pod="openshift-marketplace/redhat-operators-mm22r" Jan 21 10:08:32 crc kubenswrapper[4684]: I0121 10:08:32.713792 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/264a10c7-1457-42d2-95ae-71588cc2a4a3-catalog-content\") pod \"redhat-operators-mm22r\" (UID: \"264a10c7-1457-42d2-95ae-71588cc2a4a3\") " pod="openshift-marketplace/redhat-operators-mm22r" Jan 21 10:08:32 crc kubenswrapper[4684]: I0121 10:08:32.819282 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/264a10c7-1457-42d2-95ae-71588cc2a4a3-utilities\") pod \"redhat-operators-mm22r\" (UID: \"264a10c7-1457-42d2-95ae-71588cc2a4a3\") " pod="openshift-marketplace/redhat-operators-mm22r" Jan 21 10:08:32 crc kubenswrapper[4684]: I0121 10:08:32.819951 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhczt\" (UniqueName: \"kubernetes.io/projected/264a10c7-1457-42d2-95ae-71588cc2a4a3-kube-api-access-zhczt\") pod \"redhat-operators-mm22r\" (UID: \"264a10c7-1457-42d2-95ae-71588cc2a4a3\") " pod="openshift-marketplace/redhat-operators-mm22r" Jan 21 10:08:32 crc kubenswrapper[4684]: I0121 10:08:32.820006 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/264a10c7-1457-42d2-95ae-71588cc2a4a3-catalog-content\") pod \"redhat-operators-mm22r\" (UID: \"264a10c7-1457-42d2-95ae-71588cc2a4a3\") " pod="openshift-marketplace/redhat-operators-mm22r" Jan 21 10:08:32 crc kubenswrapper[4684]: I0121 10:08:32.820689 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/264a10c7-1457-42d2-95ae-71588cc2a4a3-catalog-content\") pod \"redhat-operators-mm22r\" (UID: \"264a10c7-1457-42d2-95ae-71588cc2a4a3\") " pod="openshift-marketplace/redhat-operators-mm22r" Jan 21 10:08:32 crc kubenswrapper[4684]: I0121 10:08:32.820689 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/264a10c7-1457-42d2-95ae-71588cc2a4a3-utilities\") pod \"redhat-operators-mm22r\" (UID: \"264a10c7-1457-42d2-95ae-71588cc2a4a3\") " pod="openshift-marketplace/redhat-operators-mm22r" Jan 21 10:08:32 crc kubenswrapper[4684]: I0121 10:08:32.853830 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zhczt\" (UniqueName: \"kubernetes.io/projected/264a10c7-1457-42d2-95ae-71588cc2a4a3-kube-api-access-zhczt\") pod \"redhat-operators-mm22r\" (UID: \"264a10c7-1457-42d2-95ae-71588cc2a4a3\") " pod="openshift-marketplace/redhat-operators-mm22r" Jan 21 10:08:32 crc kubenswrapper[4684]: I0121 10:08:32.983975 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mm22r" Jan 21 10:08:33 crc kubenswrapper[4684]: I0121 10:08:33.001845 4684 generic.go:334] "Generic (PLEG): container finished" podID="0e22e72a-79a2-45ea-8093-17f56c2f1748" containerID="aafbdf570b6238dc16f42510a392e07981fd9edcff85eb6d66938cb2cbab4071" exitCode=0 Jan 21 10:08:33 crc kubenswrapper[4684]: I0121 10:08:33.001961 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gs98q" event={"ID":"0e22e72a-79a2-45ea-8093-17f56c2f1748","Type":"ContainerDied","Data":"aafbdf570b6238dc16f42510a392e07981fd9edcff85eb6d66938cb2cbab4071"} Jan 21 10:08:33 crc kubenswrapper[4684]: I0121 10:08:33.002002 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gs98q" event={"ID":"0e22e72a-79a2-45ea-8093-17f56c2f1748","Type":"ContainerStarted","Data":"ab3c3b01aef550018189716130c7b923a65445e00a9bd8b0aa575e0b09eede59"} Jan 21 10:08:33 crc kubenswrapper[4684]: I0121 10:08:33.009439 4684 generic.go:334] "Generic (PLEG): container finished" podID="34c69792-c774-40f9-a0de-d6f1a6f82714" containerID="392b7419a03d980ad1336dc178d203186f5ce7cd936f5cbb08d149a772961c37" exitCode=0 Jan 21 10:08:33 crc kubenswrapper[4684]: I0121 10:08:33.009518 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-9pprb" event={"ID":"34c69792-c774-40f9-a0de-d6f1a6f82714","Type":"ContainerDied","Data":"392b7419a03d980ad1336dc178d203186f5ce7cd936f5cbb08d149a772961c37"} Jan 21 10:08:33 crc kubenswrapper[4684]: I0121 10:08:33.012563 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"afe702bfb19ffca01c0417005432678890ab76629bb96213ee0ff800f851b4a4"} Jan 21 10:08:33 crc kubenswrapper[4684]: I0121 10:08:33.017828 4684 generic.go:334] "Generic (PLEG): container finished" podID="1e37eeae-7608-474f-8346-3a3add011e41" containerID="3b1633ecf0d357f96c423fa3cc1d3ed298dadd5318c670a0dc84437c0802051d" exitCode=0 Jan 21 10:08:33 crc kubenswrapper[4684]: I0121 10:08:33.017918 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vs72" event={"ID":"1e37eeae-7608-474f-8346-3a3add011e41","Type":"ContainerDied","Data":"3b1633ecf0d357f96c423fa3cc1d3ed298dadd5318c670a0dc84437c0802051d"} Jan 21 10:08:33 crc kubenswrapper[4684]: I0121 10:08:33.017949 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vs72" event={"ID":"1e37eeae-7608-474f-8346-3a3add011e41","Type":"ContainerStarted","Data":"68b810803c66fce5e22f8528e689e2f5acca9b98f391d655a9ad4a44da000c51"} Jan 21 10:08:33 crc kubenswrapper[4684]: I0121 10:08:33.059983 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" event={"ID":"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad","Type":"ContainerStarted","Data":"c056e47f5eca2d60a43f1542b5fd7b483bbd0c5901d8ade48408d6ecacabcd27"} Jan 21 10:08:33 crc 
kubenswrapper[4684]: I0121 10:08:33.061058 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" event={"ID":"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad","Type":"ContainerStarted","Data":"6871f9ad1f607ff61d102f10265b65e3c44e1dcff0df229e75249b66b27476ad"} Jan 21 10:08:33 crc kubenswrapper[4684]: I0121 10:08:33.061139 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:33 crc kubenswrapper[4684]: I0121 10:08:33.064489 4684 generic.go:334] "Generic (PLEG): container finished" podID="1d8d0a34-73e7-4fae-a317-4140bf78b26d" containerID="06ed12401df37077f065ce3c14906c32ff3754e4fc5b3d2a91a45786d20790f0" exitCode=0 Jan 21 10:08:33 crc kubenswrapper[4684]: I0121 10:08:33.064545 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vq4pn" event={"ID":"1d8d0a34-73e7-4fae-a317-4140bf78b26d","Type":"ContainerDied","Data":"06ed12401df37077f065ce3c14906c32ff3754e4fc5b3d2a91a45786d20790f0"} Jan 21 10:08:33 crc kubenswrapper[4684]: I0121 10:08:33.087527 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a24b6d99f2963491ce20a40aca8a4f13b901f33339ac0c98f1d19bf6961912b0"} Jan 21 10:08:33 crc kubenswrapper[4684]: I0121 10:08:33.139667 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-s5mkm" event={"ID":"843f3aca-bc65-445d-bed9-afbaa8a052d7","Type":"ContainerStarted","Data":"40e6bd66ba9718291a070aba35a1749d350c07d07e84e258cdc44ae77bc268cb"} Jan 21 10:08:33 crc kubenswrapper[4684]: I0121 10:08:33.173216 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"52e133c5227ae4da42c862abfc4b6383764abcf06d4c585d1f1c65507d34d17e"} Jan 21 10:08:33 crc kubenswrapper[4684]: I0121 10:08:33.174735 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:08:33 crc kubenswrapper[4684]: I0121 10:08:33.214866 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" podStartSLOduration=131.21483827 podStartE2EDuration="2m11.21483827s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:33.208883377 +0000 UTC m=+150.966966344" watchObservedRunningTime="2026-01-21 10:08:33.21483827 +0000 UTC m=+150.972921237" Jan 21 10:08:33 crc kubenswrapper[4684]: I0121 10:08:33.216718 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qqrkg"] Jan 21 10:08:33 crc kubenswrapper[4684]: I0121 10:08:33.260630 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-s5mkm" podStartSLOduration=15.260600942 podStartE2EDuration="15.260600942s" podCreationTimestamp="2026-01-21 10:08:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:33.25801982 +0000 UTC m=+151.016102787" 
watchObservedRunningTime="2026-01-21 10:08:33.260600942 +0000 UTC m=+151.018683899" Jan 21 10:08:33 crc kubenswrapper[4684]: I0121 10:08:33.300653 4684 patch_prober.go:28] interesting pod/router-default-5444994796-6g6vs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:08:33 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld Jan 21 10:08:33 crc kubenswrapper[4684]: [+]process-running ok Jan 21 10:08:33 crc kubenswrapper[4684]: healthz check failed Jan 21 10:08:33 crc kubenswrapper[4684]: I0121 10:08:33.300764 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6g6vs" podUID="50a26307-43c9-4325-b992-91cac18db66c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:08:33 crc kubenswrapper[4684]: I0121 10:08:33.492842 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mm22r"] Jan 21 10:08:33 crc kubenswrapper[4684]: W0121 10:08:33.534198 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod264a10c7_1457_42d2_95ae_71588cc2a4a3.slice/crio-0870f1175bb019f05c42c89d86847dbfc560926f6d5ce1edb7afc2b08e088020 WatchSource:0}: Error finding container 0870f1175bb019f05c42c89d86847dbfc560926f6d5ce1edb7afc2b08e088020: Status 404 returned error can't find the container with id 0870f1175bb019f05c42c89d86847dbfc560926f6d5ce1edb7afc2b08e088020 Jan 21 10:08:34 crc kubenswrapper[4684]: I0121 10:08:34.086522 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 10:08:34 crc kubenswrapper[4684]: I0121 10:08:34.088240 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 10:08:34 crc kubenswrapper[4684]: I0121 10:08:34.095567 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 21 10:08:34 crc kubenswrapper[4684]: I0121 10:08:34.100160 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 10:08:34 crc kubenswrapper[4684]: I0121 10:08:34.102407 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 21 10:08:34 crc kubenswrapper[4684]: I0121 10:08:34.153289 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf98c583-e817-4ccc-b778-397c5eb532f9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bf98c583-e817-4ccc-b778-397c5eb532f9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 10:08:34 crc kubenswrapper[4684]: I0121 10:08:34.153635 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf98c583-e817-4ccc-b778-397c5eb532f9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bf98c583-e817-4ccc-b778-397c5eb532f9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 10:08:34 crc kubenswrapper[4684]: I0121 10:08:34.196892 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm22r" event={"ID":"264a10c7-1457-42d2-95ae-71588cc2a4a3","Type":"ContainerStarted","Data":"098c7be33f6dd9b8e65f3a2cb835c1773b8c7e914afd763cba3951f0a4d4adcb"} Jan 21 10:08:34 crc kubenswrapper[4684]: I0121 10:08:34.196981 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm22r" event={"ID":"264a10c7-1457-42d2-95ae-71588cc2a4a3","Type":"ContainerStarted","Data":"0870f1175bb019f05c42c89d86847dbfc560926f6d5ce1edb7afc2b08e088020"} Jan 21 10:08:34 crc kubenswrapper[4684]: I0121 10:08:34.206060 4684 generic.go:334] "Generic (PLEG): container finished" podID="b0b6cdf0-27ef-4701-b0d2-4b877b043253" containerID="3e9690697645ed9cabdf2034ae868b6ea06d52540ff4238b0fe64381e1496285" exitCode=0 Jan 21 10:08:34 crc kubenswrapper[4684]: I0121 10:08:34.208068 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqrkg" event={"ID":"b0b6cdf0-27ef-4701-b0d2-4b877b043253","Type":"ContainerDied","Data":"3e9690697645ed9cabdf2034ae868b6ea06d52540ff4238b0fe64381e1496285"} Jan 21 10:08:34 crc kubenswrapper[4684]: I0121 10:08:34.208125 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqrkg" event={"ID":"b0b6cdf0-27ef-4701-b0d2-4b877b043253","Type":"ContainerStarted","Data":"cd6736fd4d48f00b17cf7a1313c0cf1a960f11a7b68e8f9953316c3ee229c61b"} Jan 21 10:08:34 crc kubenswrapper[4684]: E0121 10:08:34.236587 4684 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod264a10c7_1457_42d2_95ae_71588cc2a4a3.slice/crio-098c7be33f6dd9b8e65f3a2cb835c1773b8c7e914afd763cba3951f0a4d4adcb.scope\": RecentStats: unable to find data in memory cache]" Jan 21 10:08:34 crc kubenswrapper[4684]: I0121 10:08:34.254467 4684 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf98c583-e817-4ccc-b778-397c5eb532f9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bf98c583-e817-4ccc-b778-397c5eb532f9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 10:08:34 crc kubenswrapper[4684]: I0121 10:08:34.254679 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf98c583-e817-4ccc-b778-397c5eb532f9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bf98c583-e817-4ccc-b778-397c5eb532f9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 10:08:34 crc kubenswrapper[4684]: I0121 10:08:34.256404 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf98c583-e817-4ccc-b778-397c5eb532f9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bf98c583-e817-4ccc-b778-397c5eb532f9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 10:08:34 crc kubenswrapper[4684]: I0121 10:08:34.282516 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf98c583-e817-4ccc-b778-397c5eb532f9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bf98c583-e817-4ccc-b778-397c5eb532f9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 10:08:34 crc kubenswrapper[4684]: I0121 10:08:34.309666 4684 patch_prober.go:28] interesting pod/router-default-5444994796-6g6vs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:08:34 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld Jan 21 10:08:34 crc kubenswrapper[4684]: [+]process-running ok Jan 21 10:08:34 crc kubenswrapper[4684]: healthz check failed Jan 21 10:08:34 crc kubenswrapper[4684]: I0121 10:08:34.309760 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6g6vs" podUID="50a26307-43c9-4325-b992-91cac18db66c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:08:34 crc kubenswrapper[4684]: I0121 10:08:34.423623 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 10:08:34 crc kubenswrapper[4684]: I0121 10:08:34.625504 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-9pprb" Jan 21 10:08:34 crc kubenswrapper[4684]: I0121 10:08:34.763541 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34c69792-c774-40f9-a0de-d6f1a6f82714-config-volume\") pod \"34c69792-c774-40f9-a0de-d6f1a6f82714\" (UID: \"34c69792-c774-40f9-a0de-d6f1a6f82714\") " Jan 21 10:08:34 crc kubenswrapper[4684]: I0121 10:08:34.763630 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4ngz\" (UniqueName: \"kubernetes.io/projected/34c69792-c774-40f9-a0de-d6f1a6f82714-kube-api-access-x4ngz\") pod \"34c69792-c774-40f9-a0de-d6f1a6f82714\" (UID: \"34c69792-c774-40f9-a0de-d6f1a6f82714\") " Jan 21 10:08:34 crc kubenswrapper[4684]: I0121 10:08:34.763692 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34c69792-c774-40f9-a0de-d6f1a6f82714-secret-volume\") pod \"34c69792-c774-40f9-a0de-d6f1a6f82714\" (UID: \"34c69792-c774-40f9-a0de-d6f1a6f82714\") " Jan 21 10:08:34 crc kubenswrapper[4684]: I0121 10:08:34.765140 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34c69792-c774-40f9-a0de-d6f1a6f82714-config-volume" (OuterVolumeSpecName: "config-volume") pod "34c69792-c774-40f9-a0de-d6f1a6f82714" (UID: "34c69792-c774-40f9-a0de-d6f1a6f82714"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:08:34 crc kubenswrapper[4684]: I0121 10:08:34.775773 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34c69792-c774-40f9-a0de-d6f1a6f82714-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "34c69792-c774-40f9-a0de-d6f1a6f82714" (UID: "34c69792-c774-40f9-a0de-d6f1a6f82714"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:08:34 crc kubenswrapper[4684]: I0121 10:08:34.776337 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34c69792-c774-40f9-a0de-d6f1a6f82714-kube-api-access-x4ngz" (OuterVolumeSpecName: "kube-api-access-x4ngz") pod "34c69792-c774-40f9-a0de-d6f1a6f82714" (UID: "34c69792-c774-40f9-a0de-d6f1a6f82714"). InnerVolumeSpecName "kube-api-access-x4ngz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:08:34 crc kubenswrapper[4684]: I0121 10:08:34.843345 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 10:08:34 crc kubenswrapper[4684]: I0121 10:08:34.869344 4684 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34c69792-c774-40f9-a0de-d6f1a6f82714-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 10:08:34 crc kubenswrapper[4684]: I0121 10:08:34.869860 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4ngz\" (UniqueName: \"kubernetes.io/projected/34c69792-c774-40f9-a0de-d6f1a6f82714-kube-api-access-x4ngz\") on node \"crc\" DevicePath \"\"" Jan 21 10:08:34 crc kubenswrapper[4684]: I0121 10:08:34.869875 4684 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34c69792-c774-40f9-a0de-d6f1a6f82714-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 10:08:35 crc kubenswrapper[4684]: I0121 10:08:35.237430 4684 generic.go:334] "Generic (PLEG): container finished" podID="264a10c7-1457-42d2-95ae-71588cc2a4a3" containerID="098c7be33f6dd9b8e65f3a2cb835c1773b8c7e914afd763cba3951f0a4d4adcb" exitCode=0 Jan 21 10:08:35 crc kubenswrapper[4684]: I0121 10:08:35.237572 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm22r" event={"ID":"264a10c7-1457-42d2-95ae-71588cc2a4a3","Type":"ContainerDied","Data":"098c7be33f6dd9b8e65f3a2cb835c1773b8c7e914afd763cba3951f0a4d4adcb"} Jan 21 10:08:35 crc kubenswrapper[4684]: I0121 10:08:35.259891 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-9pprb" event={"ID":"34c69792-c774-40f9-a0de-d6f1a6f82714","Type":"ContainerDied","Data":"46e4215bf57151971d6622e7f2597def2d73229893abce5bafc25cca2cd28edf"} Jan 21 10:08:35 crc kubenswrapper[4684]: I0121 10:08:35.259944 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46e4215bf57151971d6622e7f2597def2d73229893abce5bafc25cca2cd28edf" Jan 21 10:08:35 crc kubenswrapper[4684]: I0121 10:08:35.260043 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-9pprb" Jan 21 10:08:35 crc kubenswrapper[4684]: I0121 10:08:35.271108 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bf98c583-e817-4ccc-b778-397c5eb532f9","Type":"ContainerStarted","Data":"046b4b6a6e639cac028112c1d244e3d3f1d6c6f98af9d335a69316123afe098c"} Jan 21 10:08:35 crc kubenswrapper[4684]: I0121 10:08:35.299895 4684 patch_prober.go:28] interesting pod/router-default-5444994796-6g6vs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:08:35 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld Jan 21 10:08:35 crc kubenswrapper[4684]: [+]process-running ok Jan 21 10:08:35 crc kubenswrapper[4684]: healthz check failed Jan 21 10:08:35 crc kubenswrapper[4684]: I0121 10:08:35.300014 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6g6vs" podUID="50a26307-43c9-4325-b992-91cac18db66c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:08:35 crc kubenswrapper[4684]: I0121 10:08:35.777584 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:35 crc kubenswrapper[4684]: I0121 10:08:35.783125 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-b5z5t" Jan 21 10:08:36 crc kubenswrapper[4684]: I0121 10:08:36.290217 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bf98c583-e817-4ccc-b778-397c5eb532f9","Type":"ContainerStarted","Data":"31b3c43c8e76b30d3aba849b0de8258d45f07d65fe643ed99e5bc8f0102e6980"} Jan 21 10:08:36 crc kubenswrapper[4684]: I0121 10:08:36.301675 4684 patch_prober.go:28] interesting pod/router-default-5444994796-6g6vs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:08:36 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld Jan 21 10:08:36 crc kubenswrapper[4684]: [+]process-running ok Jan 21 10:08:36 crc kubenswrapper[4684]: healthz check failed Jan 21 10:08:36 crc kubenswrapper[4684]: I0121 10:08:36.301751 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6g6vs" podUID="50a26307-43c9-4325-b992-91cac18db66c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:08:36 crc kubenswrapper[4684]: I0121 10:08:36.309037 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.309022183 podStartE2EDuration="2.309022183s" podCreationTimestamp="2026-01-21 10:08:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:08:36.304475701 +0000 UTC m=+154.062558668" watchObservedRunningTime="2026-01-21 10:08:36.309022183 +0000 UTC m=+154.067105150" Jan 21 10:08:37 crc kubenswrapper[4684]: I0121 10:08:37.302521 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:08:37 crc kubenswrapper[4684]: I0121 10:08:37.302583 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:08:37 crc kubenswrapper[4684]: I0121 10:08:37.302949 4684 patch_prober.go:28] interesting pod/router-default-5444994796-6g6vs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:08:37 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld Jan 21 10:08:37 crc kubenswrapper[4684]: [+]process-running ok Jan 21 10:08:37 crc kubenswrapper[4684]: healthz check failed Jan 21 10:08:37 crc kubenswrapper[4684]: I0121 10:08:37.302968 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6g6vs" podUID="50a26307-43c9-4325-b992-91cac18db66c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:08:37 crc kubenswrapper[4684]: I0121 10:08:37.341122 4684 generic.go:334] "Generic (PLEG): container finished" podID="bf98c583-e817-4ccc-b778-397c5eb532f9" containerID="31b3c43c8e76b30d3aba849b0de8258d45f07d65fe643ed99e5bc8f0102e6980" exitCode=0 Jan 21 10:08:37 crc kubenswrapper[4684]: I0121 10:08:37.341180 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bf98c583-e817-4ccc-b778-397c5eb532f9","Type":"ContainerDied","Data":"31b3c43c8e76b30d3aba849b0de8258d45f07d65fe643ed99e5bc8f0102e6980"} Jan 21 10:08:37 crc kubenswrapper[4684]: I0121 10:08:37.407477 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 10:08:37 crc kubenswrapper[4684]: E0121 10:08:37.407840 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34c69792-c774-40f9-a0de-d6f1a6f82714" containerName="collect-profiles" Jan 21 10:08:37 crc kubenswrapper[4684]: I0121 10:08:37.407854 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="34c69792-c774-40f9-a0de-d6f1a6f82714" containerName="collect-profiles" Jan 21 10:08:37 crc kubenswrapper[4684]: I0121 10:08:37.407962 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="34c69792-c774-40f9-a0de-d6f1a6f82714" containerName="collect-profiles" Jan 21 10:08:37 crc kubenswrapper[4684]: I0121 10:08:37.408585 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 10:08:37 crc kubenswrapper[4684]: I0121 10:08:37.410826 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 10:08:37 crc kubenswrapper[4684]: I0121 10:08:37.424881 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 10:08:37 crc kubenswrapper[4684]: I0121 10:08:37.427656 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 10:08:37 crc kubenswrapper[4684]: I0121 10:08:37.432764 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22111ba9-f0e6-4866-8a5f-c44fa0cafc3f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"22111ba9-f0e6-4866-8a5f-c44fa0cafc3f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 10:08:37 crc kubenswrapper[4684]: I0121 10:08:37.432835 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22111ba9-f0e6-4866-8a5f-c44fa0cafc3f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"22111ba9-f0e6-4866-8a5f-c44fa0cafc3f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 10:08:37 crc kubenswrapper[4684]: I0121 10:08:37.534112 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22111ba9-f0e6-4866-8a5f-c44fa0cafc3f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"22111ba9-f0e6-4866-8a5f-c44fa0cafc3f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 10:08:37 crc kubenswrapper[4684]: I0121 10:08:37.534682 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22111ba9-f0e6-4866-8a5f-c44fa0cafc3f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"22111ba9-f0e6-4866-8a5f-c44fa0cafc3f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 10:08:37 crc kubenswrapper[4684]: I0121 10:08:37.534764 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22111ba9-f0e6-4866-8a5f-c44fa0cafc3f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"22111ba9-f0e6-4866-8a5f-c44fa0cafc3f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 10:08:37 crc kubenswrapper[4684]: I0121 10:08:37.562389 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22111ba9-f0e6-4866-8a5f-c44fa0cafc3f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"22111ba9-f0e6-4866-8a5f-c44fa0cafc3f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 10:08:37 crc kubenswrapper[4684]: I0121 10:08:37.767433 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 10:08:38 crc kubenswrapper[4684]: I0121 10:08:38.353595 4684 patch_prober.go:28] interesting pod/router-default-5444994796-6g6vs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:08:38 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld Jan 21 10:08:38 crc kubenswrapper[4684]: [+]process-running ok Jan 21 10:08:38 crc kubenswrapper[4684]: healthz check failed Jan 21 10:08:38 crc kubenswrapper[4684]: I0121 10:08:38.354092 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6g6vs" podUID="50a26307-43c9-4325-b992-91cac18db66c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:08:38 crc kubenswrapper[4684]: I0121 10:08:38.402880 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 10:08:39 crc kubenswrapper[4684]: I0121 10:08:39.302225 4684 patch_prober.go:28] interesting pod/router-default-5444994796-6g6vs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:08:39 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld Jan 21 10:08:39 crc kubenswrapper[4684]: [+]process-running ok Jan 21 10:08:39 crc kubenswrapper[4684]: healthz check failed Jan 21 10:08:39 crc kubenswrapper[4684]: I0121 10:08:39.302318 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6g6vs" podUID="50a26307-43c9-4325-b992-91cac18db66c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:08:39 crc kubenswrapper[4684]: I0121 10:08:39.714202 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-mnmmn" Jan 21 10:08:40 crc kubenswrapper[4684]: I0121 10:08:40.298750 4684 patch_prober.go:28] interesting pod/router-default-5444994796-6g6vs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:08:40 crc kubenswrapper[4684]: [-]has-synced failed: reason withheld Jan 21 10:08:40 crc kubenswrapper[4684]: [+]process-running ok Jan 21 10:08:40 crc kubenswrapper[4684]: healthz check failed Jan 21 10:08:40 crc kubenswrapper[4684]: I0121 10:08:40.299078 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6g6vs" podUID="50a26307-43c9-4325-b992-91cac18db66c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:08:40 crc kubenswrapper[4684]: I0121 10:08:40.630571 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-5rnkx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 21 10:08:40 crc kubenswrapper[4684]: I0121 10:08:40.630629 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5rnkx" podUID="0a7f0592-7c89-496c-acb0-3ae031dbffb1" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 21 10:08:40 crc kubenswrapper[4684]: I0121 10:08:40.630960 4684 patch_prober.go:28] interesting pod/downloads-7954f5f757-5rnkx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 21 10:08:40 crc kubenswrapper[4684]: I0121 10:08:40.631046 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5rnkx" podUID="0a7f0592-7c89-496c-acb0-3ae031dbffb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 21 10:08:40 crc kubenswrapper[4684]: I0121 10:08:40.835004 4684 patch_prober.go:28] interesting pod/console-f9d7485db-x6zks container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.37:8443/health\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Jan 21 10:08:40 crc kubenswrapper[4684]: I0121 10:08:40.835076 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-x6zks" podUID="d1cc5379-f931-4e88-b422-4ecc9e33e2a0" containerName="console" probeResult="failure" output="Get \"https://10.217.0.37:8443/health\": dial tcp 10.217.0.37:8443: connect: connection refused" Jan 21 10:08:41 crc kubenswrapper[4684]: I0121 10:08:41.299510 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-6g6vs" Jan 21 10:08:41 crc kubenswrapper[4684]: I0121 10:08:41.302702 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-6g6vs" Jan 21 10:08:44 crc kubenswrapper[4684]: I0121 10:08:44.632139 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49971ee3-e56a-4d50-8fc5-231bdcfc92d5-metrics-certs\") pod \"network-metrics-daemon-7wzh7\" (UID: \"49971ee3-e56a-4d50-8fc5-231bdcfc92d5\") " pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:08:44 crc kubenswrapper[4684]: I0121 10:08:44.654001 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49971ee3-e56a-4d50-8fc5-231bdcfc92d5-metrics-certs\") pod \"network-metrics-daemon-7wzh7\" (UID: \"49971ee3-e56a-4d50-8fc5-231bdcfc92d5\") " pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:08:44 crc kubenswrapper[4684]: I0121 10:08:44.735163 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7wzh7" Jan 21 10:08:50 crc kubenswrapper[4684]: I0121 10:08:50.645467 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-5rnkx" Jan 21 10:08:50 crc kubenswrapper[4684]: I0121 10:08:50.838202 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-x6zks" Jan 21 10:08:50 crc kubenswrapper[4684]: I0121 10:08:50.842738 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-x6zks" Jan 21 10:08:51 crc kubenswrapper[4684]: I0121 10:08:51.386249 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:08:53 crc kubenswrapper[4684]: I0121 10:08:53.920237 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 10:08:54 crc kubenswrapper[4684]: I0121 10:08:54.009934 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf98c583-e817-4ccc-b778-397c5eb532f9-kubelet-dir\") pod \"bf98c583-e817-4ccc-b778-397c5eb532f9\" (UID: \"bf98c583-e817-4ccc-b778-397c5eb532f9\") " Jan 21 10:08:54 crc kubenswrapper[4684]: I0121 10:08:54.010015 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf98c583-e817-4ccc-b778-397c5eb532f9-kube-api-access\") pod \"bf98c583-e817-4ccc-b778-397c5eb532f9\" (UID: \"bf98c583-e817-4ccc-b778-397c5eb532f9\") " Jan 21 10:08:54 crc kubenswrapper[4684]: I0121 10:08:54.010112 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf98c583-e817-4ccc-b778-397c5eb532f9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bf98c583-e817-4ccc-b778-397c5eb532f9" (UID: "bf98c583-e817-4ccc-b778-397c5eb532f9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:08:54 crc kubenswrapper[4684]: I0121 10:08:54.010523 4684 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf98c583-e817-4ccc-b778-397c5eb532f9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 10:08:54 crc kubenswrapper[4684]: I0121 10:08:54.031938 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf98c583-e817-4ccc-b778-397c5eb532f9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bf98c583-e817-4ccc-b778-397c5eb532f9" (UID: "bf98c583-e817-4ccc-b778-397c5eb532f9"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:08:54 crc kubenswrapper[4684]: I0121 10:08:54.112051 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf98c583-e817-4ccc-b778-397c5eb532f9-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 10:08:54 crc kubenswrapper[4684]: I0121 10:08:54.618941 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"22111ba9-f0e6-4866-8a5f-c44fa0cafc3f","Type":"ContainerStarted","Data":"1ef7a72e5a613338e1b38c951d3ac66fd13e45f8f2e7a75724ecfbd8c323efd6"} Jan 21 10:08:54 crc kubenswrapper[4684]: I0121 10:08:54.620604 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bf98c583-e817-4ccc-b778-397c5eb532f9","Type":"ContainerDied","Data":"046b4b6a6e639cac028112c1d244e3d3f1d6c6f98af9d335a69316123afe098c"} Jan 21 10:08:54 crc kubenswrapper[4684]: I0121 10:08:54.620664 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="046b4b6a6e639cac028112c1d244e3d3f1d6c6f98af9d335a69316123afe098c" Jan 21 10:08:54 crc kubenswrapper[4684]: I0121 10:08:54.620772 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 10:09:01 crc kubenswrapper[4684]: I0121 10:09:01.041301 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dcnrm" Jan 21 10:09:06 crc kubenswrapper[4684]: E0121 10:09:06.381582 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 10:09:06 crc kubenswrapper[4684]: E0121 10:09:06.382439 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-745vk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-sclvq_openshift-marketplace(7e209352-f09e-4ce0-a8a7-87bd39e1e962): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 10:09:06 crc kubenswrapper[4684]: E0121 10:09:06.383674 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-sclvq" podUID="7e209352-f09e-4ce0-a8a7-87bd39e1e962" Jan 21 10:09:07 crc kubenswrapper[4684]: I0121 10:09:07.302032 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:09:07 crc kubenswrapper[4684]: I0121 10:09:07.302394 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:09:10 crc kubenswrapper[4684]: I0121 10:09:10.078387 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:09:10 crc kubenswrapper[4684]: E0121 10:09:10.828813 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-sclvq" podUID="7e209352-f09e-4ce0-a8a7-87bd39e1e962" Jan 21 10:09:10 crc kubenswrapper[4684]: E0121 10:09:10.921770 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 10:09:10 crc kubenswrapper[4684]: E0121 10:09:10.922024 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhczt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-mm22r_openshift-marketplace(264a10c7-1457-42d2-95ae-71588cc2a4a3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 10:09:10 crc kubenswrapper[4684]: E0121 10:09:10.923321 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-mm22r" podUID="264a10c7-1457-42d2-95ae-71588cc2a4a3" Jan 21 10:09:10 crc kubenswrapper[4684]: E0121 10:09:10.927126 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 10:09:10 crc kubenswrapper[4684]: E0121 10:09:10.927377 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t5scz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-qqrkg_openshift-marketplace(b0b6cdf0-27ef-4701-b0d2-4b877b043253): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 10:09:10 crc kubenswrapper[4684]: E0121 10:09:10.928532 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-qqrkg" podUID="b0b6cdf0-27ef-4701-b0d2-4b877b043253" Jan 21 10:09:11 crc kubenswrapper[4684]: I0121 10:09:11.003350 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 10:09:11 crc kubenswrapper[4684]: E0121 10:09:11.003666 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf98c583-e817-4ccc-b778-397c5eb532f9" containerName="pruner" Jan 21 10:09:11 crc kubenswrapper[4684]: I0121 10:09:11.003681 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf98c583-e817-4ccc-b778-397c5eb532f9" containerName="pruner" Jan 21 10:09:11 crc kubenswrapper[4684]: I0121 10:09:11.003798 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf98c583-e817-4ccc-b778-397c5eb532f9" containerName="pruner" Jan 21 10:09:11 crc kubenswrapper[4684]: I0121 10:09:11.004310 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:09:11 crc kubenswrapper[4684]: I0121 10:09:11.019766 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 10:09:11 crc kubenswrapper[4684]: I0121 10:09:11.117948 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14b0093f-c965-4217-9fc7-85091f44e6e7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"14b0093f-c965-4217-9fc7-85091f44e6e7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:09:11 crc kubenswrapper[4684]: I0121 10:09:11.118015 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14b0093f-c965-4217-9fc7-85091f44e6e7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"14b0093f-c965-4217-9fc7-85091f44e6e7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:09:11 crc kubenswrapper[4684]: I0121 10:09:11.219929 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14b0093f-c965-4217-9fc7-85091f44e6e7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"14b0093f-c965-4217-9fc7-85091f44e6e7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:09:11 crc kubenswrapper[4684]: I0121 10:09:11.220629 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14b0093f-c965-4217-9fc7-85091f44e6e7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"14b0093f-c965-4217-9fc7-85091f44e6e7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:09:11 crc kubenswrapper[4684]: I0121 10:09:11.220725 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14b0093f-c965-4217-9fc7-85091f44e6e7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"14b0093f-c965-4217-9fc7-85091f44e6e7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:09:11 crc kubenswrapper[4684]: I0121 10:09:11.251143 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14b0093f-c965-4217-9fc7-85091f44e6e7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"14b0093f-c965-4217-9fc7-85091f44e6e7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:09:11 crc kubenswrapper[4684]: I0121 10:09:11.331641 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:09:12 crc kubenswrapper[4684]: E0121 10:09:12.464627 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-mm22r" podUID="264a10c7-1457-42d2-95ae-71588cc2a4a3" Jan 21 10:09:12 crc kubenswrapper[4684]: E0121 10:09:12.631488 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 10:09:12 crc kubenswrapper[4684]: E0121 10:09:12.631672 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tf4xt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-4vs72_openshift-marketplace(1e37eeae-7608-474f-8346-3a3add011e41): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 10:09:12 crc kubenswrapper[4684]: E0121 10:09:12.632935 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-4vs72" podUID="1e37eeae-7608-474f-8346-3a3add011e41" Jan 21 10:09:13 crc kubenswrapper[4684]: E0121 10:09:13.045559 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 10:09:13 crc kubenswrapper[4684]: E0121 10:09:13.046614 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dgb2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-gs98q_openshift-marketplace(0e22e72a-79a2-45ea-8093-17f56c2f1748): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 10:09:13 crc kubenswrapper[4684]: E0121 10:09:13.048462 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-gs98q" podUID="0e22e72a-79a2-45ea-8093-17f56c2f1748" Jan 21 10:09:13 crc kubenswrapper[4684]: I0121 10:09:13.139851 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7wzh7"] Jan 21 10:09:13 crc kubenswrapper[4684]: W0121 10:09:13.144869 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49971ee3_e56a_4d50_8fc5_231bdcfc92d5.slice/crio-a443addc6d3f5ab6ec6cb51684b444c0d1d2ab0507f4312cc4230eeb0c60c8ac WatchSource:0}: Error finding container a443addc6d3f5ab6ec6cb51684b444c0d1d2ab0507f4312cc4230eeb0c60c8ac: Status 404 returned error can't find the container with id a443addc6d3f5ab6ec6cb51684b444c0d1d2ab0507f4312cc4230eeb0c60c8ac Jan 21 10:09:13 crc kubenswrapper[4684]: I0121 10:09:13.232398 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 10:09:13 crc kubenswrapper[4684]: I0121 10:09:13.271843 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7wzh7" event={"ID":"49971ee3-e56a-4d50-8fc5-231bdcfc92d5","Type":"ContainerStarted","Data":"a443addc6d3f5ab6ec6cb51684b444c0d1d2ab0507f4312cc4230eeb0c60c8ac"} Jan 21 10:09:13 crc kubenswrapper[4684]: I0121 10:09:13.274580 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-vq4pn" event={"ID":"1d8d0a34-73e7-4fae-a317-4140bf78b26d","Type":"ContainerStarted","Data":"eb12c8120cd14ac720e77f115236835d03c4a05fd26732fc8834a1b0f48c6563"} Jan 21 10:09:13 crc kubenswrapper[4684]: I0121 10:09:13.280617 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cnf2q" event={"ID":"2b89446d-5402-4130-b07f-cdd46b6e3d5d","Type":"ContainerStarted","Data":"82f4fefddc65875c5065817a7d42f4942736cf38007391bbae1a03208f20469d"} Jan 21 10:09:13 crc kubenswrapper[4684]: I0121 10:09:13.284065 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"22111ba9-f0e6-4866-8a5f-c44fa0cafc3f","Type":"ContainerStarted","Data":"8ed4d2f7b59a34535414a298c39d7d58bde3aa6145957a9c0c3221b29d2ddd40"} Jan 21 10:09:13 crc kubenswrapper[4684]: I0121 10:09:13.298903 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q67n" event={"ID":"e08a2ff1-2079-4571-9c86-3167ce7e20d6","Type":"ContainerStarted","Data":"9ae1edff280e8f0dcafc0d20d8720c07c9eed6c786f24c0a30db79b3f8d17c9e"} Jan 21 10:09:13 crc kubenswrapper[4684]: I0121 10:09:13.300560 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"14b0093f-c965-4217-9fc7-85091f44e6e7","Type":"ContainerStarted","Data":"df901eb8fcf5ac811658f3df59e9a07a1c33439f116352f7edd7c118b389b1bb"} Jan 21 10:09:13 crc kubenswrapper[4684]: E0121 10:09:13.302577 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-gs98q" podUID="0e22e72a-79a2-45ea-8093-17f56c2f1748" Jan 21 10:09:13 crc kubenswrapper[4684]: E0121 10:09:13.303249 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4vs72" podUID="1e37eeae-7608-474f-8346-3a3add011e41" Jan 21 10:09:13 crc kubenswrapper[4684]: I0121 10:09:13.310999 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=36.310855733 podStartE2EDuration="36.310855733s" podCreationTimestamp="2026-01-21 10:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:09:13.309520926 +0000 UTC m=+191.067603893" watchObservedRunningTime="2026-01-21 10:09:13.310855733 +0000 UTC m=+191.068938700" Jan 21 10:09:14 crc kubenswrapper[4684]: I0121 10:09:14.308220 4684 generic.go:334] "Generic (PLEG): container finished" podID="2b89446d-5402-4130-b07f-cdd46b6e3d5d" containerID="82f4fefddc65875c5065817a7d42f4942736cf38007391bbae1a03208f20469d" exitCode=0 Jan 21 10:09:14 crc kubenswrapper[4684]: I0121 10:09:14.308352 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cnf2q" event={"ID":"2b89446d-5402-4130-b07f-cdd46b6e3d5d","Type":"ContainerDied","Data":"82f4fefddc65875c5065817a7d42f4942736cf38007391bbae1a03208f20469d"} Jan 21 10:09:14 crc kubenswrapper[4684]: I0121 10:09:14.310913 4684 generic.go:334] "Generic (PLEG): container 
finished" podID="22111ba9-f0e6-4866-8a5f-c44fa0cafc3f" containerID="8ed4d2f7b59a34535414a298c39d7d58bde3aa6145957a9c0c3221b29d2ddd40" exitCode=0 Jan 21 10:09:14 crc kubenswrapper[4684]: I0121 10:09:14.311038 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"22111ba9-f0e6-4866-8a5f-c44fa0cafc3f","Type":"ContainerDied","Data":"8ed4d2f7b59a34535414a298c39d7d58bde3aa6145957a9c0c3221b29d2ddd40"} Jan 21 10:09:14 crc kubenswrapper[4684]: I0121 10:09:14.315847 4684 generic.go:334] "Generic (PLEG): container finished" podID="e08a2ff1-2079-4571-9c86-3167ce7e20d6" containerID="9ae1edff280e8f0dcafc0d20d8720c07c9eed6c786f24c0a30db79b3f8d17c9e" exitCode=0 Jan 21 10:09:14 crc kubenswrapper[4684]: I0121 10:09:14.315934 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q67n" event={"ID":"e08a2ff1-2079-4571-9c86-3167ce7e20d6","Type":"ContainerDied","Data":"9ae1edff280e8f0dcafc0d20d8720c07c9eed6c786f24c0a30db79b3f8d17c9e"} Jan 21 10:09:14 crc kubenswrapper[4684]: I0121 10:09:14.317731 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"14b0093f-c965-4217-9fc7-85091f44e6e7","Type":"ContainerStarted","Data":"5a46e2051b9008e671e0a6a2934d278275ae93fb20dc95ab9ac229cb78ebd6e3"} Jan 21 10:09:14 crc kubenswrapper[4684]: I0121 10:09:14.322047 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7wzh7" event={"ID":"49971ee3-e56a-4d50-8fc5-231bdcfc92d5","Type":"ContainerStarted","Data":"7e43441116908df26ed3433cb0e7ca12d1b8968c5de61180aadc16e285b830b9"} Jan 21 10:09:14 crc kubenswrapper[4684]: I0121 10:09:14.322078 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7wzh7" event={"ID":"49971ee3-e56a-4d50-8fc5-231bdcfc92d5","Type":"ContainerStarted","Data":"49d182a90957713071b27420687e69f50723b5d48be7de1d6a8e41e3889e699c"} Jan 21 10:09:14 crc kubenswrapper[4684]: I0121 10:09:14.335307 4684 generic.go:334] "Generic (PLEG): container finished" podID="1d8d0a34-73e7-4fae-a317-4140bf78b26d" containerID="eb12c8120cd14ac720e77f115236835d03c4a05fd26732fc8834a1b0f48c6563" exitCode=0 Jan 21 10:09:14 crc kubenswrapper[4684]: I0121 10:09:14.335352 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vq4pn" event={"ID":"1d8d0a34-73e7-4fae-a317-4140bf78b26d","Type":"ContainerDied","Data":"eb12c8120cd14ac720e77f115236835d03c4a05fd26732fc8834a1b0f48c6563"} Jan 21 10:09:14 crc kubenswrapper[4684]: I0121 10:09:14.348350 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=4.348322728 podStartE2EDuration="4.348322728s" podCreationTimestamp="2026-01-21 10:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:09:14.345644851 +0000 UTC m=+192.103727818" watchObservedRunningTime="2026-01-21 10:09:14.348322728 +0000 UTC m=+192.106405695" Jan 21 10:09:14 crc kubenswrapper[4684]: I0121 10:09:14.409148 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-7wzh7" podStartSLOduration=172.409124866 podStartE2EDuration="2m52.409124866s" podCreationTimestamp="2026-01-21 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:09:14.406508873 +0000 UTC m=+192.164591850" watchObservedRunningTime="2026-01-21 10:09:14.409124866 +0000 UTC m=+192.167207833" Jan 21 10:09:15 crc kubenswrapper[4684]: I0121 10:09:15.343405 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vq4pn" event={"ID":"1d8d0a34-73e7-4fae-a317-4140bf78b26d","Type":"ContainerStarted","Data":"4ee444c72e238666aa8f22d6d429f99395917d2f9c39911513ba7aa8acd9c7c1"} Jan 21 10:09:15 crc kubenswrapper[4684]: I0121 10:09:15.346063 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cnf2q" event={"ID":"2b89446d-5402-4130-b07f-cdd46b6e3d5d","Type":"ContainerStarted","Data":"11be68f825a04b7f3db083ee34acd247538f1ad6937ae554c9483eda71797d1b"} Jan 21 10:09:15 crc kubenswrapper[4684]: I0121 10:09:15.348590 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q67n" event={"ID":"e08a2ff1-2079-4571-9c86-3167ce7e20d6","Type":"ContainerStarted","Data":"5eb73ba6565e83c8259750262c8b72b47a644bef34badcc22b143ccf6871dc31"} Jan 21 10:09:15 crc kubenswrapper[4684]: I0121 10:09:15.350289 4684 generic.go:334] "Generic (PLEG): container finished" podID="14b0093f-c965-4217-9fc7-85091f44e6e7" containerID="5a46e2051b9008e671e0a6a2934d278275ae93fb20dc95ab9ac229cb78ebd6e3" exitCode=0 Jan 21 10:09:15 crc kubenswrapper[4684]: I0121 10:09:15.350439 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"14b0093f-c965-4217-9fc7-85091f44e6e7","Type":"ContainerDied","Data":"5a46e2051b9008e671e0a6a2934d278275ae93fb20dc95ab9ac229cb78ebd6e3"} Jan 21 10:09:15 crc kubenswrapper[4684]: I0121 10:09:15.366577 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vq4pn" podStartSLOduration=4.617910576 podStartE2EDuration="46.366554025s" podCreationTimestamp="2026-01-21 10:08:29 +0000 UTC" firstStartedPulling="2026-01-21 10:08:33.067221905 +0000 UTC m=+150.825304872" lastFinishedPulling="2026-01-21 10:09:14.815865354 +0000 UTC m=+192.573948321" observedRunningTime="2026-01-21 10:09:15.360467798 +0000 UTC m=+193.118550765" watchObservedRunningTime="2026-01-21 10:09:15.366554025 +0000 UTC m=+193.124636992" Jan 21 10:09:15 crc kubenswrapper[4684]: I0121 10:09:15.379866 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cnf2q" podStartSLOduration=3.523807523 podStartE2EDuration="46.379843149s" podCreationTimestamp="2026-01-21 10:08:29 +0000 UTC" firstStartedPulling="2026-01-21 10:08:31.887196336 +0000 UTC m=+149.645279303" lastFinishedPulling="2026-01-21 10:09:14.743231972 +0000 UTC m=+192.501314929" observedRunningTime="2026-01-21 10:09:15.379435184 +0000 UTC m=+193.137518151" watchObservedRunningTime="2026-01-21 10:09:15.379843149 +0000 UTC m=+193.137926106" Jan 21 10:09:15 crc kubenswrapper[4684]: I0121 10:09:15.422428 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2q67n" podStartSLOduration=4.457800399 podStartE2EDuration="47.422395197s" podCreationTimestamp="2026-01-21 10:08:28 +0000 UTC" firstStartedPulling="2026-01-21 10:08:31.816607139 +0000 UTC m=+149.574690096" lastFinishedPulling="2026-01-21 10:09:14.781201927 +0000 UTC m=+192.539284894" observedRunningTime="2026-01-21 10:09:15.416394913 +0000 
Jan 21 10:09:15 crc kubenswrapper[4684]: I0121 10:09:15.422428 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2q67n" podStartSLOduration=4.457800399 podStartE2EDuration="47.422395197s" podCreationTimestamp="2026-01-21 10:08:28 +0000 UTC" firstStartedPulling="2026-01-21 10:08:31.816607139 +0000 UTC m=+149.574690096" lastFinishedPulling="2026-01-21 10:09:14.781201927 +0000 UTC m=+192.539284894" observedRunningTime="2026-01-21 10:09:15.416394913 +0000 UTC m=+193.174477880" watchObservedRunningTime="2026-01-21 10:09:15.422395197 +0000 UTC m=+193.180478164" Jan 21 10:09:15 crc kubenswrapper[4684]: I0121 10:09:15.708139 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 10:09:15 crc kubenswrapper[4684]: I0121 10:09:15.785397 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22111ba9-f0e6-4866-8a5f-c44fa0cafc3f-kubelet-dir\") pod \"22111ba9-f0e6-4866-8a5f-c44fa0cafc3f\" (UID: \"22111ba9-f0e6-4866-8a5f-c44fa0cafc3f\") " Jan 21 10:09:15 crc kubenswrapper[4684]: I0121 10:09:15.785522 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22111ba9-f0e6-4866-8a5f-c44fa0cafc3f-kube-api-access\") pod \"22111ba9-f0e6-4866-8a5f-c44fa0cafc3f\" (UID: \"22111ba9-f0e6-4866-8a5f-c44fa0cafc3f\") " Jan 21 10:09:15 crc kubenswrapper[4684]: I0121 10:09:15.785527 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22111ba9-f0e6-4866-8a5f-c44fa0cafc3f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "22111ba9-f0e6-4866-8a5f-c44fa0cafc3f" (UID: "22111ba9-f0e6-4866-8a5f-c44fa0cafc3f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:09:15 crc kubenswrapper[4684]: I0121 10:09:15.785724 4684 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22111ba9-f0e6-4866-8a5f-c44fa0cafc3f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 10:09:15 crc kubenswrapper[4684]: I0121 10:09:15.798785 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22111ba9-f0e6-4866-8a5f-c44fa0cafc3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "22111ba9-f0e6-4866-8a5f-c44fa0cafc3f" (UID: "22111ba9-f0e6-4866-8a5f-c44fa0cafc3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:09:15 crc kubenswrapper[4684]: I0121 10:09:15.887619 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22111ba9-f0e6-4866-8a5f-c44fa0cafc3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 10:09:16 crc kubenswrapper[4684]: I0121 10:09:16.360147 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"22111ba9-f0e6-4866-8a5f-c44fa0cafc3f","Type":"ContainerDied","Data":"1ef7a72e5a613338e1b38c951d3ac66fd13e45f8f2e7a75724ecfbd8c323efd6"} Jan 21 10:09:16 crc kubenswrapper[4684]: I0121 10:09:16.360684 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ef7a72e5a613338e1b38c951d3ac66fd13e45f8f2e7a75724ecfbd8c323efd6" Jan 21 10:09:16 crc kubenswrapper[4684]: I0121 10:09:16.360209 4684 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 10:09:16 crc kubenswrapper[4684]: I0121 10:09:16.604244 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 10:09:16 crc kubenswrapper[4684]: E0121 10:09:16.604522 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22111ba9-f0e6-4866-8a5f-c44fa0cafc3f" containerName="pruner" Jan 21 10:09:16 crc kubenswrapper[4684]: I0121 10:09:16.604535 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="22111ba9-f0e6-4866-8a5f-c44fa0cafc3f" containerName="pruner" Jan 21 10:09:16 crc kubenswrapper[4684]: I0121 10:09:16.604639 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="22111ba9-f0e6-4866-8a5f-c44fa0cafc3f" containerName="pruner" Jan 21 10:09:16 crc kubenswrapper[4684]: I0121 10:09:16.605027 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:09:16 crc kubenswrapper[4684]: I0121 10:09:16.616349 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 10:09:16 crc kubenswrapper[4684]: I0121 10:09:16.704720 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f6512b4c-f7d5-420f-8c1c-866c1406dd7f-var-lock\") pod \"installer-9-crc\" (UID: \"f6512b4c-f7d5-420f-8c1c-866c1406dd7f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:09:16 crc kubenswrapper[4684]: I0121 10:09:16.704832 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6512b4c-f7d5-420f-8c1c-866c1406dd7f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f6512b4c-f7d5-420f-8c1c-866c1406dd7f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:09:16 crc kubenswrapper[4684]: I0121 10:09:16.704923 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6512b4c-f7d5-420f-8c1c-866c1406dd7f-kube-api-access\") pod \"installer-9-crc\" (UID: \"f6512b4c-f7d5-420f-8c1c-866c1406dd7f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:09:16 crc kubenswrapper[4684]: I0121 10:09:16.749900 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:09:16 crc kubenswrapper[4684]: I0121 10:09:16.806826 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14b0093f-c965-4217-9fc7-85091f44e6e7-kubelet-dir\") pod \"14b0093f-c965-4217-9fc7-85091f44e6e7\" (UID: \"14b0093f-c965-4217-9fc7-85091f44e6e7\") " Jan 21 10:09:16 crc kubenswrapper[4684]: I0121 10:09:16.807033 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14b0093f-c965-4217-9fc7-85091f44e6e7-kube-api-access\") pod \"14b0093f-c965-4217-9fc7-85091f44e6e7\" (UID: \"14b0093f-c965-4217-9fc7-85091f44e6e7\") " Jan 21 10:09:16 crc kubenswrapper[4684]: I0121 10:09:16.808119 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14b0093f-c965-4217-9fc7-85091f44e6e7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "14b0093f-c965-4217-9fc7-85091f44e6e7" (UID: "14b0093f-c965-4217-9fc7-85091f44e6e7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:09:16 crc kubenswrapper[4684]: I0121 10:09:16.809956 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6512b4c-f7d5-420f-8c1c-866c1406dd7f-kube-api-access\") pod \"installer-9-crc\" (UID: \"f6512b4c-f7d5-420f-8c1c-866c1406dd7f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:09:16 crc kubenswrapper[4684]: I0121 10:09:16.810262 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f6512b4c-f7d5-420f-8c1c-866c1406dd7f-var-lock\") pod \"installer-9-crc\" (UID: \"f6512b4c-f7d5-420f-8c1c-866c1406dd7f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:09:16 crc kubenswrapper[4684]: I0121 10:09:16.810352 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6512b4c-f7d5-420f-8c1c-866c1406dd7f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f6512b4c-f7d5-420f-8c1c-866c1406dd7f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:09:16 crc kubenswrapper[4684]: I0121 10:09:16.810458 4684 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14b0093f-c965-4217-9fc7-85091f44e6e7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 10:09:16 crc kubenswrapper[4684]: I0121 10:09:16.810547 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6512b4c-f7d5-420f-8c1c-866c1406dd7f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f6512b4c-f7d5-420f-8c1c-866c1406dd7f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:09:16 crc kubenswrapper[4684]: I0121 10:09:16.810637 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f6512b4c-f7d5-420f-8c1c-866c1406dd7f-var-lock\") pod \"installer-9-crc\" (UID: \"f6512b4c-f7d5-420f-8c1c-866c1406dd7f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:09:16 crc kubenswrapper[4684]: I0121 10:09:16.829906 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14b0093f-c965-4217-9fc7-85091f44e6e7-kube-api-access" 
(OuterVolumeSpecName: "kube-api-access") pod "14b0093f-c965-4217-9fc7-85091f44e6e7" (UID: "14b0093f-c965-4217-9fc7-85091f44e6e7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:09:16 crc kubenswrapper[4684]: I0121 10:09:16.830149 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6512b4c-f7d5-420f-8c1c-866c1406dd7f-kube-api-access\") pod \"installer-9-crc\" (UID: \"f6512b4c-f7d5-420f-8c1c-866c1406dd7f\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:09:16 crc kubenswrapper[4684]: I0121 10:09:16.912240 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14b0093f-c965-4217-9fc7-85091f44e6e7-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 10:09:16 crc kubenswrapper[4684]: I0121 10:09:16.929838 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:09:17 crc kubenswrapper[4684]: I0121 10:09:17.340667 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 10:09:17 crc kubenswrapper[4684]: I0121 10:09:17.374274 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"14b0093f-c965-4217-9fc7-85091f44e6e7","Type":"ContainerDied","Data":"df901eb8fcf5ac811658f3df59e9a07a1c33439f116352f7edd7c118b389b1bb"} Jan 21 10:09:17 crc kubenswrapper[4684]: I0121 10:09:17.374350 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df901eb8fcf5ac811658f3df59e9a07a1c33439f116352f7edd7c118b389b1bb" Jan 21 10:09:17 crc kubenswrapper[4684]: I0121 10:09:17.374495 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:09:17 crc kubenswrapper[4684]: I0121 10:09:17.386095 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f6512b4c-f7d5-420f-8c1c-866c1406dd7f","Type":"ContainerStarted","Data":"ad17140c86b6d28555e8bdd0ff3fabcd6ae93148ee79556abcc2123520c91b7c"} Jan 21 10:09:19 crc kubenswrapper[4684]: I0121 10:09:19.402420 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f6512b4c-f7d5-420f-8c1c-866c1406dd7f","Type":"ContainerStarted","Data":"8552d149ad4a71ad5794460397a9dbd91364d34a8d323ae7d766ac92849cf107"} Jan 21 10:09:19 crc kubenswrapper[4684]: I0121 10:09:19.420776 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.420750389 podStartE2EDuration="3.420750389s" podCreationTimestamp="2026-01-21 10:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:09:19.418924473 +0000 UTC m=+197.177007440" watchObservedRunningTime="2026-01-21 10:09:19.420750389 +0000 UTC m=+197.178833356" Jan 21 10:09:19 crc kubenswrapper[4684]: I0121 10:09:19.664313 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cnf2q" Jan 21 10:09:19 crc kubenswrapper[4684]: I0121 10:09:19.664770 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cnf2q" Jan 21 10:09:19 crc kubenswrapper[4684]: I0121 10:09:19.666830 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2q67n" Jan 21 10:09:19 crc kubenswrapper[4684]: I0121 10:09:19.667107 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2q67n" Jan 21 10:09:19 crc kubenswrapper[4684]: I0121 10:09:19.739290 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cnf2q" Jan 21 10:09:19 crc kubenswrapper[4684]: I0121 10:09:19.740174 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2q67n" Jan 21 10:09:20 crc kubenswrapper[4684]: I0121 10:09:20.078912 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vq4pn" Jan 21 10:09:20 crc kubenswrapper[4684]: I0121 10:09:20.078976 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vq4pn" Jan 21 10:09:20 crc kubenswrapper[4684]: I0121 10:09:20.122122 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vq4pn" Jan 21 10:09:20 crc kubenswrapper[4684]: I0121 10:09:20.450881 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cnf2q" Jan 21 10:09:20 crc kubenswrapper[4684]: I0121 10:09:20.451593 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2q67n" Jan 21 10:09:20 crc kubenswrapper[4684]: I0121 10:09:20.455505 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-vq4pn" Jan 21 10:09:21 crc kubenswrapper[4684]: I0121 10:09:21.121423 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vq4pn"] Jan 21 10:09:22 crc kubenswrapper[4684]: I0121 10:09:22.417302 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vq4pn" podUID="1d8d0a34-73e7-4fae-a317-4140bf78b26d" containerName="registry-server" containerID="cri-o://4ee444c72e238666aa8f22d6d429f99395917d2f9c39911513ba7aa8acd9c7c1" gracePeriod=2 Jan 21 10:09:23 crc kubenswrapper[4684]: I0121 10:09:23.427854 4684 generic.go:334] "Generic (PLEG): container finished" podID="1d8d0a34-73e7-4fae-a317-4140bf78b26d" containerID="4ee444c72e238666aa8f22d6d429f99395917d2f9c39911513ba7aa8acd9c7c1" exitCode=0 Jan 21 10:09:23 crc kubenswrapper[4684]: I0121 10:09:23.427913 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vq4pn" event={"ID":"1d8d0a34-73e7-4fae-a317-4140bf78b26d","Type":"ContainerDied","Data":"4ee444c72e238666aa8f22d6d429f99395917d2f9c39911513ba7aa8acd9c7c1"} Jan 21 10:09:23 crc kubenswrapper[4684]: I0121 10:09:23.923348 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vq4pn" Jan 21 10:09:24 crc kubenswrapper[4684]: I0121 10:09:24.037348 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnxf7\" (UniqueName: \"kubernetes.io/projected/1d8d0a34-73e7-4fae-a317-4140bf78b26d-kube-api-access-wnxf7\") pod \"1d8d0a34-73e7-4fae-a317-4140bf78b26d\" (UID: \"1d8d0a34-73e7-4fae-a317-4140bf78b26d\") " Jan 21 10:09:24 crc kubenswrapper[4684]: I0121 10:09:24.037557 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d8d0a34-73e7-4fae-a317-4140bf78b26d-catalog-content\") pod \"1d8d0a34-73e7-4fae-a317-4140bf78b26d\" (UID: \"1d8d0a34-73e7-4fae-a317-4140bf78b26d\") " Jan 21 10:09:24 crc kubenswrapper[4684]: I0121 10:09:24.037713 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d8d0a34-73e7-4fae-a317-4140bf78b26d-utilities\") pod \"1d8d0a34-73e7-4fae-a317-4140bf78b26d\" (UID: \"1d8d0a34-73e7-4fae-a317-4140bf78b26d\") " Jan 21 10:09:24 crc kubenswrapper[4684]: I0121 10:09:24.038986 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d8d0a34-73e7-4fae-a317-4140bf78b26d-utilities" (OuterVolumeSpecName: "utilities") pod "1d8d0a34-73e7-4fae-a317-4140bf78b26d" (UID: "1d8d0a34-73e7-4fae-a317-4140bf78b26d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:09:24 crc kubenswrapper[4684]: I0121 10:09:24.045595 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d8d0a34-73e7-4fae-a317-4140bf78b26d-kube-api-access-wnxf7" (OuterVolumeSpecName: "kube-api-access-wnxf7") pod "1d8d0a34-73e7-4fae-a317-4140bf78b26d" (UID: "1d8d0a34-73e7-4fae-a317-4140bf78b26d"). InnerVolumeSpecName "kube-api-access-wnxf7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:09:24 crc kubenswrapper[4684]: I0121 10:09:24.094997 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d8d0a34-73e7-4fae-a317-4140bf78b26d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d8d0a34-73e7-4fae-a317-4140bf78b26d" (UID: "1d8d0a34-73e7-4fae-a317-4140bf78b26d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:09:24 crc kubenswrapper[4684]: I0121 10:09:24.139907 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnxf7\" (UniqueName: \"kubernetes.io/projected/1d8d0a34-73e7-4fae-a317-4140bf78b26d-kube-api-access-wnxf7\") on node \"crc\" DevicePath \"\"" Jan 21 10:09:24 crc kubenswrapper[4684]: I0121 10:09:24.139987 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d8d0a34-73e7-4fae-a317-4140bf78b26d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:09:24 crc kubenswrapper[4684]: I0121 10:09:24.139999 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d8d0a34-73e7-4fae-a317-4140bf78b26d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:09:24 crc kubenswrapper[4684]: I0121 10:09:24.437154 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vq4pn" Jan 21 10:09:24 crc kubenswrapper[4684]: I0121 10:09:24.437572 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vq4pn" event={"ID":"1d8d0a34-73e7-4fae-a317-4140bf78b26d","Type":"ContainerDied","Data":"66e6ffc3f069b5694c190fcbb6071c424186fa9c6d70809ccc612e430231a350"} Jan 21 10:09:24 crc kubenswrapper[4684]: I0121 10:09:24.437715 4684 scope.go:117] "RemoveContainer" containerID="4ee444c72e238666aa8f22d6d429f99395917d2f9c39911513ba7aa8acd9c7c1" Jan 21 10:09:24 crc kubenswrapper[4684]: I0121 10:09:24.469935 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vq4pn"] Jan 21 10:09:24 crc kubenswrapper[4684]: I0121 10:09:24.473925 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vq4pn"] Jan 21 10:09:24 crc kubenswrapper[4684]: I0121 10:09:24.524420 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d8d0a34-73e7-4fae-a317-4140bf78b26d" path="/var/lib/kubelet/pods/1d8d0a34-73e7-4fae-a317-4140bf78b26d/volumes" Jan 21 10:09:25 crc kubenswrapper[4684]: I0121 10:09:25.131820 4684 scope.go:117] "RemoveContainer" containerID="eb12c8120cd14ac720e77f115236835d03c4a05fd26732fc8834a1b0f48c6563" Jan 21 10:09:25 crc kubenswrapper[4684]: I0121 10:09:25.935500 4684 scope.go:117] "RemoveContainer" containerID="06ed12401df37077f065ce3c14906c32ff3754e4fc5b3d2a91a45786d20790f0" Jan 21 10:09:29 crc kubenswrapper[4684]: I0121 10:09:29.487612 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gs98q" event={"ID":"0e22e72a-79a2-45ea-8093-17f56c2f1748","Type":"ContainerStarted","Data":"e4ec020c808ef9cc71ca8f6386e7b45d9a3088020296fad6f49c6e1459ad09a7"} Jan 21 10:09:29 crc kubenswrapper[4684]: I0121 10:09:29.493500 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm22r" 
event={"ID":"264a10c7-1457-42d2-95ae-71588cc2a4a3","Type":"ContainerStarted","Data":"c51f60da12f60e6dedc6d068b9c648c2aa6679e3c4404758592fd171b25059c7"} Jan 21 10:09:29 crc kubenswrapper[4684]: I0121 10:09:29.497449 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqrkg" event={"ID":"b0b6cdf0-27ef-4701-b0d2-4b877b043253","Type":"ContainerStarted","Data":"eb24a311a16f2c9331a27378219053062995cbc1e001b408a486c28e1c03ed80"} Jan 21 10:09:29 crc kubenswrapper[4684]: I0121 10:09:29.499976 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sclvq" event={"ID":"7e209352-f09e-4ce0-a8a7-87bd39e1e962","Type":"ContainerStarted","Data":"85edca8f1bf92def4bdbf22b8ed70a1ed72f88e2edec67c970c2ca0b175ad1e8"} Jan 21 10:09:30 crc kubenswrapper[4684]: I0121 10:09:30.509146 4684 generic.go:334] "Generic (PLEG): container finished" podID="0e22e72a-79a2-45ea-8093-17f56c2f1748" containerID="e4ec020c808ef9cc71ca8f6386e7b45d9a3088020296fad6f49c6e1459ad09a7" exitCode=0 Jan 21 10:09:30 crc kubenswrapper[4684]: I0121 10:09:30.509219 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gs98q" event={"ID":"0e22e72a-79a2-45ea-8093-17f56c2f1748","Type":"ContainerDied","Data":"e4ec020c808ef9cc71ca8f6386e7b45d9a3088020296fad6f49c6e1459ad09a7"} Jan 21 10:09:30 crc kubenswrapper[4684]: I0121 10:09:30.520843 4684 generic.go:334] "Generic (PLEG): container finished" podID="264a10c7-1457-42d2-95ae-71588cc2a4a3" containerID="c51f60da12f60e6dedc6d068b9c648c2aa6679e3c4404758592fd171b25059c7" exitCode=0 Jan 21 10:09:30 crc kubenswrapper[4684]: I0121 10:09:30.524474 4684 generic.go:334] "Generic (PLEG): container finished" podID="b0b6cdf0-27ef-4701-b0d2-4b877b043253" containerID="eb24a311a16f2c9331a27378219053062995cbc1e001b408a486c28e1c03ed80" exitCode=0 Jan 21 10:09:30 crc kubenswrapper[4684]: I0121 10:09:30.529092 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm22r" event={"ID":"264a10c7-1457-42d2-95ae-71588cc2a4a3","Type":"ContainerDied","Data":"c51f60da12f60e6dedc6d068b9c648c2aa6679e3c4404758592fd171b25059c7"} Jan 21 10:09:30 crc kubenswrapper[4684]: I0121 10:09:30.529166 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqrkg" event={"ID":"b0b6cdf0-27ef-4701-b0d2-4b877b043253","Type":"ContainerDied","Data":"eb24a311a16f2c9331a27378219053062995cbc1e001b408a486c28e1c03ed80"} Jan 21 10:09:30 crc kubenswrapper[4684]: I0121 10:09:30.547747 4684 generic.go:334] "Generic (PLEG): container finished" podID="1e37eeae-7608-474f-8346-3a3add011e41" containerID="2b91dfbacdebf4441bcbbc169464652c23c8f4b0352a99374f8534cda5b3fcfe" exitCode=0 Jan 21 10:09:30 crc kubenswrapper[4684]: I0121 10:09:30.547928 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vs72" event={"ID":"1e37eeae-7608-474f-8346-3a3add011e41","Type":"ContainerDied","Data":"2b91dfbacdebf4441bcbbc169464652c23c8f4b0352a99374f8534cda5b3fcfe"} Jan 21 10:09:30 crc kubenswrapper[4684]: I0121 10:09:30.555058 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sclvq" event={"ID":"7e209352-f09e-4ce0-a8a7-87bd39e1e962","Type":"ContainerDied","Data":"85edca8f1bf92def4bdbf22b8ed70a1ed72f88e2edec67c970c2ca0b175ad1e8"} Jan 21 10:09:30 crc kubenswrapper[4684]: I0121 10:09:30.558601 4684 generic.go:334] "Generic (PLEG): container finished" 
podID="7e209352-f09e-4ce0-a8a7-87bd39e1e962" containerID="85edca8f1bf92def4bdbf22b8ed70a1ed72f88e2edec67c970c2ca0b175ad1e8" exitCode=0 Jan 21 10:09:31 crc kubenswrapper[4684]: I0121 10:09:31.581659 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vs72" event={"ID":"1e37eeae-7608-474f-8346-3a3add011e41","Type":"ContainerStarted","Data":"d5bc4a508d64b8bb28a4bd7298772647d0415df945a48ea9bc908f3db6ce649b"} Jan 21 10:09:31 crc kubenswrapper[4684]: I0121 10:09:31.585825 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sclvq" event={"ID":"7e209352-f09e-4ce0-a8a7-87bd39e1e962","Type":"ContainerStarted","Data":"d54aef4e3f91f5a9573f72adae0db75af8c6465ff2fe97996869c657f737c1e4"} Jan 21 10:09:31 crc kubenswrapper[4684]: I0121 10:09:31.589759 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gs98q" event={"ID":"0e22e72a-79a2-45ea-8093-17f56c2f1748","Type":"ContainerStarted","Data":"c254c78a46bfb0ffcf97995f1c7f556b21394ce9e1d5065667d03b08fcade55b"} Jan 21 10:09:31 crc kubenswrapper[4684]: I0121 10:09:31.593311 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm22r" event={"ID":"264a10c7-1457-42d2-95ae-71588cc2a4a3","Type":"ContainerStarted","Data":"de0f5a0ddb114f388ecc276f982032a59bdd7eb83bb291a4c618be048ea13afb"} Jan 21 10:09:31 crc kubenswrapper[4684]: I0121 10:09:31.596444 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqrkg" event={"ID":"b0b6cdf0-27ef-4701-b0d2-4b877b043253","Type":"ContainerStarted","Data":"d74931c1ff2881336cf1b55fc427105f721679cf38dd2768e7066ceb79091edf"} Jan 21 10:09:31 crc kubenswrapper[4684]: I0121 10:09:31.605246 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4vs72" podStartSLOduration=2.652347309 podStartE2EDuration="1m0.605225853s" podCreationTimestamp="2026-01-21 10:08:31 +0000 UTC" firstStartedPulling="2026-01-21 10:08:33.022263871 +0000 UTC m=+150.780346838" lastFinishedPulling="2026-01-21 10:09:30.975142415 +0000 UTC m=+208.733225382" observedRunningTime="2026-01-21 10:09:31.603053079 +0000 UTC m=+209.361136046" watchObservedRunningTime="2026-01-21 10:09:31.605225853 +0000 UTC m=+209.363308810" Jan 21 10:09:31 crc kubenswrapper[4684]: I0121 10:09:31.626919 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gs98q" podStartSLOduration=2.684150012 podStartE2EDuration="1m0.62689692s" podCreationTimestamp="2026-01-21 10:08:31 +0000 UTC" firstStartedPulling="2026-01-21 10:08:33.016589159 +0000 UTC m=+150.774672126" lastFinishedPulling="2026-01-21 10:09:30.959336067 +0000 UTC m=+208.717419034" observedRunningTime="2026-01-21 10:09:31.623917619 +0000 UTC m=+209.382000586" watchObservedRunningTime="2026-01-21 10:09:31.62689692 +0000 UTC m=+209.384979887" Jan 21 10:09:31 crc kubenswrapper[4684]: I0121 10:09:31.649380 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sclvq" podStartSLOduration=3.295190427 podStartE2EDuration="1m2.649339473s" podCreationTimestamp="2026-01-21 10:08:29 +0000 UTC" firstStartedPulling="2026-01-21 10:08:31.840465999 +0000 UTC m=+149.598548966" lastFinishedPulling="2026-01-21 10:09:31.194615045 +0000 UTC m=+208.952698012" observedRunningTime="2026-01-21 10:09:31.646223937 +0000 UTC 
m=+209.404306904" watchObservedRunningTime="2026-01-21 10:09:31.649339473 +0000 UTC m=+209.407422440" Jan 21 10:09:31 crc kubenswrapper[4684]: I0121 10:09:31.676849 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qqrkg" podStartSLOduration=3.873886664 podStartE2EDuration="59.676826958s" podCreationTimestamp="2026-01-21 10:08:32 +0000 UTC" firstStartedPulling="2026-01-21 10:08:35.27662232 +0000 UTC m=+153.034705287" lastFinishedPulling="2026-01-21 10:09:31.079562614 +0000 UTC m=+208.837645581" observedRunningTime="2026-01-21 10:09:31.67250175 +0000 UTC m=+209.430584727" watchObservedRunningTime="2026-01-21 10:09:31.676826958 +0000 UTC m=+209.434909925" Jan 21 10:09:31 crc kubenswrapper[4684]: I0121 10:09:31.691825 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gs98q" Jan 21 10:09:31 crc kubenswrapper[4684]: I0121 10:09:31.691987 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gs98q" Jan 21 10:09:31 crc kubenswrapper[4684]: I0121 10:09:31.740310 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mm22r" podStartSLOduration=3.988486217 podStartE2EDuration="59.740283984s" podCreationTimestamp="2026-01-21 10:08:32 +0000 UTC" firstStartedPulling="2026-01-21 10:08:35.251808845 +0000 UTC m=+153.009891812" lastFinishedPulling="2026-01-21 10:09:31.003606612 +0000 UTC m=+208.761689579" observedRunningTime="2026-01-21 10:09:31.738910338 +0000 UTC m=+209.496993305" watchObservedRunningTime="2026-01-21 10:09:31.740283984 +0000 UTC m=+209.498366951" Jan 21 10:09:31 crc kubenswrapper[4684]: I0121 10:09:31.932460 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4vs72" Jan 21 10:09:31 crc kubenswrapper[4684]: I0121 10:09:31.932527 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4vs72" Jan 21 10:09:32 crc kubenswrapper[4684]: I0121 10:09:32.544590 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qqrkg" Jan 21 10:09:32 crc kubenswrapper[4684]: I0121 10:09:32.544786 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qqrkg" Jan 21 10:09:32 crc kubenswrapper[4684]: I0121 10:09:32.780908 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-gs98q" podUID="0e22e72a-79a2-45ea-8093-17f56c2f1748" containerName="registry-server" probeResult="failure" output=< Jan 21 10:09:32 crc kubenswrapper[4684]: timeout: failed to connect service ":50051" within 1s Jan 21 10:09:32 crc kubenswrapper[4684]: > Jan 21 10:09:32 crc kubenswrapper[4684]: I0121 10:09:32.974720 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-4vs72" podUID="1e37eeae-7608-474f-8346-3a3add011e41" containerName="registry-server" probeResult="failure" output=< Jan 21 10:09:32 crc kubenswrapper[4684]: timeout: failed to connect service ":50051" within 1s Jan 21 10:09:32 crc kubenswrapper[4684]: > Jan 21 10:09:32 crc kubenswrapper[4684]: I0121 10:09:32.984561 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mm22r" Jan 21 10:09:32 crc kubenswrapper[4684]: I0121 
10:09:32.984655 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mm22r" Jan 21 10:09:33 crc kubenswrapper[4684]: I0121 10:09:33.593521 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qqrkg" podUID="b0b6cdf0-27ef-4701-b0d2-4b877b043253" containerName="registry-server" probeResult="failure" output=< Jan 21 10:09:33 crc kubenswrapper[4684]: timeout: failed to connect service ":50051" within 1s Jan 21 10:09:33 crc kubenswrapper[4684]: > Jan 21 10:09:34 crc kubenswrapper[4684]: I0121 10:09:34.035933 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mm22r" podUID="264a10c7-1457-42d2-95ae-71588cc2a4a3" containerName="registry-server" probeResult="failure" output=< Jan 21 10:09:34 crc kubenswrapper[4684]: timeout: failed to connect service ":50051" within 1s Jan 21 10:09:34 crc kubenswrapper[4684]: > Jan 21 10:09:37 crc kubenswrapper[4684]: I0121 10:09:37.302665 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:09:37 crc kubenswrapper[4684]: I0121 10:09:37.303082 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:09:37 crc kubenswrapper[4684]: I0121 10:09:37.303145 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" Jan 21 10:09:37 crc kubenswrapper[4684]: I0121 10:09:37.304004 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335"} pod="openshift-machine-config-operator/machine-config-daemon-sff6s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 10:09:37 crc kubenswrapper[4684]: I0121 10:09:37.304129 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" containerID="cri-o://dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335" gracePeriod=600 Jan 21 10:09:38 crc kubenswrapper[4684]: I0121 10:09:38.642355 4684 generic.go:334] "Generic (PLEG): container finished" podID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerID="dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335" exitCode=0 Jan 21 10:09:38 crc kubenswrapper[4684]: I0121 10:09:38.642583 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" event={"ID":"55d2c484-cf10-46b5-913f-2a033a2ff5c1","Type":"ContainerDied","Data":"dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335"}
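The machine-config-daemon sequence above is a complete liveness-probe restart cycle: the kubelet's HTTP GET against http://127.0.0.1:8798/health is refused, the probe flips to unhealthy, and the container is killed with gracePeriod=600 and started again. A rough stand-alone equivalent of the check itself follows; the URL is taken from the log, while the 1-second timeout and the 200-399 success range are the kubelet's documented HTTP-probe defaults, not something visible in these entries:

```go
// Stand-alone sketch of the HTTP liveness check the kubelet performs above.
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{Timeout: time.Second} // probe timeoutSeconds defaults to 1
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		fmt.Fprintln(os.Stderr, "probe failure:", err) // e.g. "connect: connection refused"
		os.Exit(1)
	}
	defer resp.Body.Close()
	// The kubelet counts any status in [200, 400) as probe success.
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		fmt.Fprintln(os.Stderr, "probe failure: status", resp.StatusCode)
		os.Exit(1)
	}
	fmt.Println("probe success:", resp.StatusCode)
}
```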
event={"ID":"55d2c484-cf10-46b5-913f-2a033a2ff5c1","Type":"ContainerStarted","Data":"55b5b3c030e129d333cf3f4376ff223084ac8839dda379f9382b41f6c6e1b483"} Jan 21 10:09:39 crc kubenswrapper[4684]: I0121 10:09:39.839589 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sclvq" Jan 21 10:09:39 crc kubenswrapper[4684]: I0121 10:09:39.839656 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sclvq" Jan 21 10:09:39 crc kubenswrapper[4684]: I0121 10:09:39.891725 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sclvq" Jan 21 10:09:40 crc kubenswrapper[4684]: I0121 10:09:40.710266 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sclvq" Jan 21 10:09:40 crc kubenswrapper[4684]: I0121 10:09:40.767506 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sclvq"] Jan 21 10:09:41 crc kubenswrapper[4684]: I0121 10:09:41.745052 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gs98q" Jan 21 10:09:41 crc kubenswrapper[4684]: I0121 10:09:41.793383 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gs98q" Jan 21 10:09:41 crc kubenswrapper[4684]: I0121 10:09:41.982079 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4vs72" Jan 21 10:09:42 crc kubenswrapper[4684]: I0121 10:09:42.028193 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4vs72" Jan 21 10:09:42 crc kubenswrapper[4684]: I0121 10:09:42.597990 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qqrkg" Jan 21 10:09:42 crc kubenswrapper[4684]: I0121 10:09:42.664473 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qqrkg" Jan 21 10:09:42 crc kubenswrapper[4684]: I0121 10:09:42.672499 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sclvq" podUID="7e209352-f09e-4ce0-a8a7-87bd39e1e962" containerName="registry-server" containerID="cri-o://d54aef4e3f91f5a9573f72adae0db75af8c6465ff2fe97996869c657f737c1e4" gracePeriod=2 Jan 21 10:09:42 crc kubenswrapper[4684]: I0121 10:09:42.727206 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vs72"] Jan 21 10:09:43 crc kubenswrapper[4684]: I0121 10:09:43.026245 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mm22r" Jan 21 10:09:43 crc kubenswrapper[4684]: I0121 10:09:43.074875 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mm22r" Jan 21 10:09:43 crc kubenswrapper[4684]: I0121 10:09:43.678588 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4vs72" podUID="1e37eeae-7608-474f-8346-3a3add011e41" containerName="registry-server" containerID="cri-o://d5bc4a508d64b8bb28a4bd7298772647d0415df945a48ea9bc908f3db6ce649b" gracePeriod=2 Jan 21 10:09:44 crc kubenswrapper[4684]: 
Jan 21 10:09:44 crc kubenswrapper[4684]: I0121 10:09:44.680522 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sclvq" Jan 21 10:09:44 crc kubenswrapper[4684]: I0121 10:09:44.687027 4684 generic.go:334] "Generic (PLEG): container finished" podID="1e37eeae-7608-474f-8346-3a3add011e41" containerID="d5bc4a508d64b8bb28a4bd7298772647d0415df945a48ea9bc908f3db6ce649b" exitCode=0 Jan 21 10:09:44 crc kubenswrapper[4684]: I0121 10:09:44.687101 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vs72" event={"ID":"1e37eeae-7608-474f-8346-3a3add011e41","Type":"ContainerDied","Data":"d5bc4a508d64b8bb28a4bd7298772647d0415df945a48ea9bc908f3db6ce649b"} Jan 21 10:09:44 crc kubenswrapper[4684]: I0121 10:09:44.689983 4684 generic.go:334] "Generic (PLEG): container finished" podID="7e209352-f09e-4ce0-a8a7-87bd39e1e962" containerID="d54aef4e3f91f5a9573f72adae0db75af8c6465ff2fe97996869c657f737c1e4" exitCode=0 Jan 21 10:09:44 crc kubenswrapper[4684]: I0121 10:09:44.690048 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sclvq" Jan 21 10:09:44 crc kubenswrapper[4684]: I0121 10:09:44.690049 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sclvq" event={"ID":"7e209352-f09e-4ce0-a8a7-87bd39e1e962","Type":"ContainerDied","Data":"d54aef4e3f91f5a9573f72adae0db75af8c6465ff2fe97996869c657f737c1e4"} Jan 21 10:09:44 crc kubenswrapper[4684]: I0121 10:09:44.690101 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sclvq" event={"ID":"7e209352-f09e-4ce0-a8a7-87bd39e1e962","Type":"ContainerDied","Data":"4c8a741955fee42c1f56c1774630038a25e412dbb91dcea21470efb10a1c4026"} Jan 21 10:09:44 crc kubenswrapper[4684]: I0121 10:09:44.690130 4684 scope.go:117] "RemoveContainer" containerID="d54aef4e3f91f5a9573f72adae0db75af8c6465ff2fe97996869c657f737c1e4" Jan 21 10:09:44 crc kubenswrapper[4684]: I0121 10:09:44.714760 4684 scope.go:117] "RemoveContainer" containerID="85edca8f1bf92def4bdbf22b8ed70a1ed72f88e2edec67c970c2ca0b175ad1e8" Jan 21 10:09:44 crc kubenswrapper[4684]: I0121 10:09:44.749701 4684 scope.go:117] "RemoveContainer" containerID="1050bcbe81b22223c1581bda0a78f4f9c2f3e5f2a01d263546a9ab22c2145504" Jan 21 10:09:44 crc kubenswrapper[4684]: I0121 10:09:44.775629 4684 scope.go:117] "RemoveContainer" containerID="d54aef4e3f91f5a9573f72adae0db75af8c6465ff2fe97996869c657f737c1e4" Jan 21 10:09:44 crc kubenswrapper[4684]: E0121 10:09:44.776105 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d54aef4e3f91f5a9573f72adae0db75af8c6465ff2fe97996869c657f737c1e4\": container with ID starting with d54aef4e3f91f5a9573f72adae0db75af8c6465ff2fe97996869c657f737c1e4 not found: ID does not exist" containerID="d54aef4e3f91f5a9573f72adae0db75af8c6465ff2fe97996869c657f737c1e4" Jan 21 10:09:44 crc kubenswrapper[4684]: I0121 10:09:44.776157 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d54aef4e3f91f5a9573f72adae0db75af8c6465ff2fe97996869c657f737c1e4"} err="failed to get container status \"d54aef4e3f91f5a9573f72adae0db75af8c6465ff2fe97996869c657f737c1e4\": rpc error: code = NotFound desc = could not find container \"d54aef4e3f91f5a9573f72adae0db75af8c6465ff2fe97996869c657f737c1e4\": container with ID starting with
d54aef4e3f91f5a9573f72adae0db75af8c6465ff2fe97996869c657f737c1e4 not found: ID does not exist" Jan 21 10:09:44 crc kubenswrapper[4684]: I0121 10:09:44.776195 4684 scope.go:117] "RemoveContainer" containerID="85edca8f1bf92def4bdbf22b8ed70a1ed72f88e2edec67c970c2ca0b175ad1e8" Jan 21 10:09:44 crc kubenswrapper[4684]: E0121 10:09:44.776471 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85edca8f1bf92def4bdbf22b8ed70a1ed72f88e2edec67c970c2ca0b175ad1e8\": container with ID starting with 85edca8f1bf92def4bdbf22b8ed70a1ed72f88e2edec67c970c2ca0b175ad1e8 not found: ID does not exist" containerID="85edca8f1bf92def4bdbf22b8ed70a1ed72f88e2edec67c970c2ca0b175ad1e8" Jan 21 10:09:44 crc kubenswrapper[4684]: I0121 10:09:44.776493 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85edca8f1bf92def4bdbf22b8ed70a1ed72f88e2edec67c970c2ca0b175ad1e8"} err="failed to get container status \"85edca8f1bf92def4bdbf22b8ed70a1ed72f88e2edec67c970c2ca0b175ad1e8\": rpc error: code = NotFound desc = could not find container \"85edca8f1bf92def4bdbf22b8ed70a1ed72f88e2edec67c970c2ca0b175ad1e8\": container with ID starting with 85edca8f1bf92def4bdbf22b8ed70a1ed72f88e2edec67c970c2ca0b175ad1e8 not found: ID does not exist" Jan 21 10:09:44 crc kubenswrapper[4684]: I0121 10:09:44.776506 4684 scope.go:117] "RemoveContainer" containerID="1050bcbe81b22223c1581bda0a78f4f9c2f3e5f2a01d263546a9ab22c2145504" Jan 21 10:09:44 crc kubenswrapper[4684]: E0121 10:09:44.776663 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1050bcbe81b22223c1581bda0a78f4f9c2f3e5f2a01d263546a9ab22c2145504\": container with ID starting with 1050bcbe81b22223c1581bda0a78f4f9c2f3e5f2a01d263546a9ab22c2145504 not found: ID does not exist" containerID="1050bcbe81b22223c1581bda0a78f4f9c2f3e5f2a01d263546a9ab22c2145504" Jan 21 10:09:44 crc kubenswrapper[4684]: I0121 10:09:44.776686 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1050bcbe81b22223c1581bda0a78f4f9c2f3e5f2a01d263546a9ab22c2145504"} err="failed to get container status \"1050bcbe81b22223c1581bda0a78f4f9c2f3e5f2a01d263546a9ab22c2145504\": rpc error: code = NotFound desc = could not find container \"1050bcbe81b22223c1581bda0a78f4f9c2f3e5f2a01d263546a9ab22c2145504\": container with ID starting with 1050bcbe81b22223c1581bda0a78f4f9c2f3e5f2a01d263546a9ab22c2145504 not found: ID does not exist" Jan 21 10:09:44 crc kubenswrapper[4684]: I0121 10:09:44.876814 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-745vk\" (UniqueName: \"kubernetes.io/projected/7e209352-f09e-4ce0-a8a7-87bd39e1e962-kube-api-access-745vk\") pod \"7e209352-f09e-4ce0-a8a7-87bd39e1e962\" (UID: \"7e209352-f09e-4ce0-a8a7-87bd39e1e962\") " Jan 21 10:09:44 crc kubenswrapper[4684]: I0121 10:09:44.876921 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e209352-f09e-4ce0-a8a7-87bd39e1e962-catalog-content\") pod \"7e209352-f09e-4ce0-a8a7-87bd39e1e962\" (UID: \"7e209352-f09e-4ce0-a8a7-87bd39e1e962\") "
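The DeleteContainer errors above are benign races rather than real failures: by the time the kubelet's RemoveContainer pass queried the runtime, CRI-O had already discarded the containers, so ContainerStatus returns NotFound, the error is logged, and pod cleanup continues. The common pattern for this kind of idempotent cleanup is to treat NotFound as success; a sketch using gRPC status codes (the helper is illustrative, not kubelet code):

```go
// Illustrative NotFound-tolerant cleanup, mirroring why the errors above are harmless.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeIfPresent treats NotFound from the runtime as success, so repeated
// cleanup passes over the same container ID stay idempotent.
func removeIfPresent(remove func(id string) error, id string) error {
	err := remove(id)
	if err == nil || status.Code(err) == codes.NotFound {
		return nil // removed, or already gone
	}
	return fmt.Errorf("remove container %q: %w", id, err)
}

func main() {
	// Stand-in for a CRI RemoveContainer call that lost the race to actual deletion.
	gone := func(id string) error {
		return status.Error(codes.NotFound, "could not find container: ID does not exist")
	}
	fmt.Println(removeIfPresent(gone, "d54aef4e")) // <nil>
}
```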
\"7e209352-f09e-4ce0-a8a7-87bd39e1e962\" (UID: \"7e209352-f09e-4ce0-a8a7-87bd39e1e962\") " Jan 21 10:09:44 crc kubenswrapper[4684]: I0121 10:09:44.878031 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e209352-f09e-4ce0-a8a7-87bd39e1e962-utilities" (OuterVolumeSpecName: "utilities") pod "7e209352-f09e-4ce0-a8a7-87bd39e1e962" (UID: "7e209352-f09e-4ce0-a8a7-87bd39e1e962"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:09:44 crc kubenswrapper[4684]: I0121 10:09:44.882740 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e209352-f09e-4ce0-a8a7-87bd39e1e962-kube-api-access-745vk" (OuterVolumeSpecName: "kube-api-access-745vk") pod "7e209352-f09e-4ce0-a8a7-87bd39e1e962" (UID: "7e209352-f09e-4ce0-a8a7-87bd39e1e962"). InnerVolumeSpecName "kube-api-access-745vk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:09:44 crc kubenswrapper[4684]: I0121 10:09:44.933028 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e209352-f09e-4ce0-a8a7-87bd39e1e962-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e209352-f09e-4ce0-a8a7-87bd39e1e962" (UID: "7e209352-f09e-4ce0-a8a7-87bd39e1e962"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:09:44 crc kubenswrapper[4684]: I0121 10:09:44.979625 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e209352-f09e-4ce0-a8a7-87bd39e1e962-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:09:44 crc kubenswrapper[4684]: I0121 10:09:44.979668 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e209352-f09e-4ce0-a8a7-87bd39e1e962-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:09:44 crc kubenswrapper[4684]: I0121 10:09:44.979686 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-745vk\" (UniqueName: \"kubernetes.io/projected/7e209352-f09e-4ce0-a8a7-87bd39e1e962-kube-api-access-745vk\") on node \"crc\" DevicePath \"\"" Jan 21 10:09:45 crc kubenswrapper[4684]: I0121 10:09:45.029446 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sclvq"] Jan 21 10:09:45 crc kubenswrapper[4684]: I0121 10:09:45.033485 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sclvq"] Jan 21 10:09:45 crc kubenswrapper[4684]: I0121 10:09:45.121827 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mm22r"] Jan 21 10:09:45 crc kubenswrapper[4684]: I0121 10:09:45.122326 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mm22r" podUID="264a10c7-1457-42d2-95ae-71588cc2a4a3" containerName="registry-server" containerID="cri-o://de0f5a0ddb114f388ecc276f982032a59bdd7eb83bb291a4c618be048ea13afb" gracePeriod=2 Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.079482 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4vs72" Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.195980 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf4xt\" (UniqueName: \"kubernetes.io/projected/1e37eeae-7608-474f-8346-3a3add011e41-kube-api-access-tf4xt\") pod \"1e37eeae-7608-474f-8346-3a3add011e41\" (UID: \"1e37eeae-7608-474f-8346-3a3add011e41\") " Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.196102 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e37eeae-7608-474f-8346-3a3add011e41-utilities\") pod \"1e37eeae-7608-474f-8346-3a3add011e41\" (UID: \"1e37eeae-7608-474f-8346-3a3add011e41\") " Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.196140 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e37eeae-7608-474f-8346-3a3add011e41-catalog-content\") pod \"1e37eeae-7608-474f-8346-3a3add011e41\" (UID: \"1e37eeae-7608-474f-8346-3a3add011e41\") " Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.198328 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e37eeae-7608-474f-8346-3a3add011e41-utilities" (OuterVolumeSpecName: "utilities") pod "1e37eeae-7608-474f-8346-3a3add011e41" (UID: "1e37eeae-7608-474f-8346-3a3add011e41"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.202763 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e37eeae-7608-474f-8346-3a3add011e41-kube-api-access-tf4xt" (OuterVolumeSpecName: "kube-api-access-tf4xt") pod "1e37eeae-7608-474f-8346-3a3add011e41" (UID: "1e37eeae-7608-474f-8346-3a3add011e41"). InnerVolumeSpecName "kube-api-access-tf4xt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.244648 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e37eeae-7608-474f-8346-3a3add011e41-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e37eeae-7608-474f-8346-3a3add011e41" (UID: "1e37eeae-7608-474f-8346-3a3add011e41"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.297878 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e37eeae-7608-474f-8346-3a3add011e41-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.297924 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e37eeae-7608-474f-8346-3a3add011e41-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.297938 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tf4xt\" (UniqueName: \"kubernetes.io/projected/1e37eeae-7608-474f-8346-3a3add011e41-kube-api-access-tf4xt\") on node \"crc\" DevicePath \"\"" Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.523832 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e209352-f09e-4ce0-a8a7-87bd39e1e962" path="/var/lib/kubelet/pods/7e209352-f09e-4ce0-a8a7-87bd39e1e962/volumes" Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.607268 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mm22r" Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.707752 4684 generic.go:334] "Generic (PLEG): container finished" podID="264a10c7-1457-42d2-95ae-71588cc2a4a3" containerID="de0f5a0ddb114f388ecc276f982032a59bdd7eb83bb291a4c618be048ea13afb" exitCode=0 Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.707838 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm22r" event={"ID":"264a10c7-1457-42d2-95ae-71588cc2a4a3","Type":"ContainerDied","Data":"de0f5a0ddb114f388ecc276f982032a59bdd7eb83bb291a4c618be048ea13afb"} Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.707891 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mm22r" Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.707911 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mm22r" event={"ID":"264a10c7-1457-42d2-95ae-71588cc2a4a3","Type":"ContainerDied","Data":"0870f1175bb019f05c42c89d86847dbfc560926f6d5ce1edb7afc2b08e088020"} Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.707941 4684 scope.go:117] "RemoveContainer" containerID="de0f5a0ddb114f388ecc276f982032a59bdd7eb83bb291a4c618be048ea13afb" Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.711216 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vs72" event={"ID":"1e37eeae-7608-474f-8346-3a3add011e41","Type":"ContainerDied","Data":"68b810803c66fce5e22f8528e689e2f5acca9b98f391d655a9ad4a44da000c51"} Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.711405 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4vs72" Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.729556 4684 scope.go:117] "RemoveContainer" containerID="c51f60da12f60e6dedc6d068b9c648c2aa6679e3c4404758592fd171b25059c7" Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.733094 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vs72"] Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.736308 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vs72"] Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.750725 4684 scope.go:117] "RemoveContainer" containerID="098c7be33f6dd9b8e65f3a2cb835c1773b8c7e914afd763cba3951f0a4d4adcb" Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.765913 4684 scope.go:117] "RemoveContainer" containerID="de0f5a0ddb114f388ecc276f982032a59bdd7eb83bb291a4c618be048ea13afb" Jan 21 10:09:46 crc kubenswrapper[4684]: E0121 10:09:46.766656 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de0f5a0ddb114f388ecc276f982032a59bdd7eb83bb291a4c618be048ea13afb\": container with ID starting with de0f5a0ddb114f388ecc276f982032a59bdd7eb83bb291a4c618be048ea13afb not found: ID does not exist" containerID="de0f5a0ddb114f388ecc276f982032a59bdd7eb83bb291a4c618be048ea13afb" Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.766715 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de0f5a0ddb114f388ecc276f982032a59bdd7eb83bb291a4c618be048ea13afb"} err="failed to get container status \"de0f5a0ddb114f388ecc276f982032a59bdd7eb83bb291a4c618be048ea13afb\": rpc error: code = NotFound desc = could not find container \"de0f5a0ddb114f388ecc276f982032a59bdd7eb83bb291a4c618be048ea13afb\": container with ID starting with de0f5a0ddb114f388ecc276f982032a59bdd7eb83bb291a4c618be048ea13afb not found: ID does not exist" Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.766762 4684 scope.go:117] "RemoveContainer" containerID="c51f60da12f60e6dedc6d068b9c648c2aa6679e3c4404758592fd171b25059c7" Jan 21 10:09:46 crc kubenswrapper[4684]: E0121 10:09:46.767443 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c51f60da12f60e6dedc6d068b9c648c2aa6679e3c4404758592fd171b25059c7\": container with ID starting with c51f60da12f60e6dedc6d068b9c648c2aa6679e3c4404758592fd171b25059c7 not found: ID does not exist" containerID="c51f60da12f60e6dedc6d068b9c648c2aa6679e3c4404758592fd171b25059c7" Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.767506 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c51f60da12f60e6dedc6d068b9c648c2aa6679e3c4404758592fd171b25059c7"} err="failed to get container status \"c51f60da12f60e6dedc6d068b9c648c2aa6679e3c4404758592fd171b25059c7\": rpc error: code = NotFound desc = could not find container \"c51f60da12f60e6dedc6d068b9c648c2aa6679e3c4404758592fd171b25059c7\": container with ID starting with c51f60da12f60e6dedc6d068b9c648c2aa6679e3c4404758592fd171b25059c7 not found: ID does not exist" Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.767553 4684 scope.go:117] "RemoveContainer" containerID="098c7be33f6dd9b8e65f3a2cb835c1773b8c7e914afd763cba3951f0a4d4adcb" Jan 21 10:09:46 crc kubenswrapper[4684]: E0121 10:09:46.768342 4684 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"098c7be33f6dd9b8e65f3a2cb835c1773b8c7e914afd763cba3951f0a4d4adcb\": container with ID starting with 098c7be33f6dd9b8e65f3a2cb835c1773b8c7e914afd763cba3951f0a4d4adcb not found: ID does not exist" containerID="098c7be33f6dd9b8e65f3a2cb835c1773b8c7e914afd763cba3951f0a4d4adcb" Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.768446 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"098c7be33f6dd9b8e65f3a2cb835c1773b8c7e914afd763cba3951f0a4d4adcb"} err="failed to get container status \"098c7be33f6dd9b8e65f3a2cb835c1773b8c7e914afd763cba3951f0a4d4adcb\": rpc error: code = NotFound desc = could not find container \"098c7be33f6dd9b8e65f3a2cb835c1773b8c7e914afd763cba3951f0a4d4adcb\": container with ID starting with 098c7be33f6dd9b8e65f3a2cb835c1773b8c7e914afd763cba3951f0a4d4adcb not found: ID does not exist" Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.768503 4684 scope.go:117] "RemoveContainer" containerID="d5bc4a508d64b8bb28a4bd7298772647d0415df945a48ea9bc908f3db6ce649b" Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.783329 4684 scope.go:117] "RemoveContainer" containerID="2b91dfbacdebf4441bcbbc169464652c23c8f4b0352a99374f8534cda5b3fcfe" Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.802142 4684 scope.go:117] "RemoveContainer" containerID="3b1633ecf0d357f96c423fa3cc1d3ed298dadd5318c670a0dc84437c0802051d" Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.806395 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhczt\" (UniqueName: \"kubernetes.io/projected/264a10c7-1457-42d2-95ae-71588cc2a4a3-kube-api-access-zhczt\") pod \"264a10c7-1457-42d2-95ae-71588cc2a4a3\" (UID: \"264a10c7-1457-42d2-95ae-71588cc2a4a3\") " Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.806473 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/264a10c7-1457-42d2-95ae-71588cc2a4a3-utilities\") pod \"264a10c7-1457-42d2-95ae-71588cc2a4a3\" (UID: \"264a10c7-1457-42d2-95ae-71588cc2a4a3\") " Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.806522 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/264a10c7-1457-42d2-95ae-71588cc2a4a3-catalog-content\") pod \"264a10c7-1457-42d2-95ae-71588cc2a4a3\" (UID: \"264a10c7-1457-42d2-95ae-71588cc2a4a3\") " Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.807599 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/264a10c7-1457-42d2-95ae-71588cc2a4a3-utilities" (OuterVolumeSpecName: "utilities") pod "264a10c7-1457-42d2-95ae-71588cc2a4a3" (UID: "264a10c7-1457-42d2-95ae-71588cc2a4a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.810730 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/264a10c7-1457-42d2-95ae-71588cc2a4a3-kube-api-access-zhczt" (OuterVolumeSpecName: "kube-api-access-zhczt") pod "264a10c7-1457-42d2-95ae-71588cc2a4a3" (UID: "264a10c7-1457-42d2-95ae-71588cc2a4a3"). InnerVolumeSpecName "kube-api-access-zhczt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.908003 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhczt\" (UniqueName: \"kubernetes.io/projected/264a10c7-1457-42d2-95ae-71588cc2a4a3-kube-api-access-zhczt\") on node \"crc\" DevicePath \"\"" Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.908046 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/264a10c7-1457-42d2-95ae-71588cc2a4a3-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:09:46 crc kubenswrapper[4684]: I0121 10:09:46.952336 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/264a10c7-1457-42d2-95ae-71588cc2a4a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "264a10c7-1457-42d2-95ae-71588cc2a4a3" (UID: "264a10c7-1457-42d2-95ae-71588cc2a4a3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:09:47 crc kubenswrapper[4684]: I0121 10:09:47.009793 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/264a10c7-1457-42d2-95ae-71588cc2a4a3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:09:47 crc kubenswrapper[4684]: I0121 10:09:47.039127 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mm22r"] Jan 21 10:09:47 crc kubenswrapper[4684]: I0121 10:09:47.041814 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mm22r"] Jan 21 10:09:48 crc kubenswrapper[4684]: I0121 10:09:48.522983 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e37eeae-7608-474f-8346-3a3add011e41" path="/var/lib/kubelet/pods/1e37eeae-7608-474f-8346-3a3add011e41/volumes" Jan 21 10:09:48 crc kubenswrapper[4684]: I0121 10:09:48.524061 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="264a10c7-1457-42d2-95ae-71588cc2a4a3" path="/var/lib/kubelet/pods/264a10c7-1457-42d2-95ae-71588cc2a4a3/volumes" Jan 21 10:09:50 crc kubenswrapper[4684]: I0121 10:09:50.401606 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-plndh"] Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.305078 4684 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 10:09:56 crc kubenswrapper[4684]: E0121 10:09:56.305899 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e37eeae-7608-474f-8346-3a3add011e41" containerName="extract-utilities" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.305919 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e37eeae-7608-474f-8346-3a3add011e41" containerName="extract-utilities" Jan 21 10:09:56 crc kubenswrapper[4684]: E0121 10:09:56.305932 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="264a10c7-1457-42d2-95ae-71588cc2a4a3" containerName="registry-server" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.305941 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="264a10c7-1457-42d2-95ae-71588cc2a4a3" containerName="registry-server" Jan 21 10:09:56 crc kubenswrapper[4684]: E0121 10:09:56.305961 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e209352-f09e-4ce0-a8a7-87bd39e1e962" containerName="extract-content" Jan 21 10:09:56 crc 
kubenswrapper[4684]: I0121 10:09:56.305971 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e209352-f09e-4ce0-a8a7-87bd39e1e962" containerName="extract-content" Jan 21 10:09:56 crc kubenswrapper[4684]: E0121 10:09:56.305984 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="264a10c7-1457-42d2-95ae-71588cc2a4a3" containerName="extract-utilities" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.305993 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="264a10c7-1457-42d2-95ae-71588cc2a4a3" containerName="extract-utilities" Jan 21 10:09:56 crc kubenswrapper[4684]: E0121 10:09:56.306006 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e37eeae-7608-474f-8346-3a3add011e41" containerName="extract-content" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.306015 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e37eeae-7608-474f-8346-3a3add011e41" containerName="extract-content" Jan 21 10:09:56 crc kubenswrapper[4684]: E0121 10:09:56.306031 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14b0093f-c965-4217-9fc7-85091f44e6e7" containerName="pruner" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.306039 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="14b0093f-c965-4217-9fc7-85091f44e6e7" containerName="pruner" Jan 21 10:09:56 crc kubenswrapper[4684]: E0121 10:09:56.306059 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8d0a34-73e7-4fae-a317-4140bf78b26d" containerName="extract-utilities" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.306071 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8d0a34-73e7-4fae-a317-4140bf78b26d" containerName="extract-utilities" Jan 21 10:09:56 crc kubenswrapper[4684]: E0121 10:09:56.306086 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8d0a34-73e7-4fae-a317-4140bf78b26d" containerName="extract-content" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.306095 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8d0a34-73e7-4fae-a317-4140bf78b26d" containerName="extract-content" Jan 21 10:09:56 crc kubenswrapper[4684]: E0121 10:09:56.306110 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e209352-f09e-4ce0-a8a7-87bd39e1e962" containerName="extract-utilities" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.306121 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e209352-f09e-4ce0-a8a7-87bd39e1e962" containerName="extract-utilities" Jan 21 10:09:56 crc kubenswrapper[4684]: E0121 10:09:56.306133 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8d0a34-73e7-4fae-a317-4140bf78b26d" containerName="registry-server" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.306147 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8d0a34-73e7-4fae-a317-4140bf78b26d" containerName="registry-server" Jan 21 10:09:56 crc kubenswrapper[4684]: E0121 10:09:56.306168 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="264a10c7-1457-42d2-95ae-71588cc2a4a3" containerName="extract-content" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.306177 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="264a10c7-1457-42d2-95ae-71588cc2a4a3" containerName="extract-content" Jan 21 10:09:56 crc kubenswrapper[4684]: E0121 10:09:56.306196 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e209352-f09e-4ce0-a8a7-87bd39e1e962" containerName="registry-server" Jan 21 
10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.306207 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e209352-f09e-4ce0-a8a7-87bd39e1e962" containerName="registry-server" Jan 21 10:09:56 crc kubenswrapper[4684]: E0121 10:09:56.306221 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e37eeae-7608-474f-8346-3a3add011e41" containerName="registry-server" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.306229 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e37eeae-7608-474f-8346-3a3add011e41" containerName="registry-server" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.306414 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e37eeae-7608-474f-8346-3a3add011e41" containerName="registry-server" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.306437 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="264a10c7-1457-42d2-95ae-71588cc2a4a3" containerName="registry-server" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.306453 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="14b0093f-c965-4217-9fc7-85091f44e6e7" containerName="pruner" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.306469 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e209352-f09e-4ce0-a8a7-87bd39e1e962" containerName="registry-server" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.306486 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d8d0a34-73e7-4fae-a317-4140bf78b26d" containerName="registry-server" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.307175 4684 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.307244 4684 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.307352 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
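
The cpu_manager, state_mem, and memory_manager burst above is admission-time housekeeping: before the new static pods are admitted, the kubelet sweeps its CPU and memory allocation checkpoints and drops every (podUID, containerName) entry whose pod no longer exists, here the just-deleted marketplace catalog pods. A rough Go sketch of that pruning pattern, with a plain map standing in for the kubelet's checkpointed state (the layout and names are illustrative assumptions, not the real checkpoint format):

    package main

    import "fmt"

    // key mimics the (podUID, containerName) pair the log entries print.
    type key struct{ podUID, container string }

    // pruneStale drops assignments for pods that are no longer active, the
    // shape of "RemoveStaleState: removing container" followed by
    // "Deleted CPUSet assignment".
    func pruneStale(assignments map[key]string, active map[string]bool) {
        for k := range assignments {
            if !active[k.podUID] {
                fmt.Printf("removing stale state pod=%s container=%s\n", k.podUID, k.container)
                delete(assignments, k) // deleting while ranging is safe in Go
            }
        }
    }

    func main() {
        state := map[key]string{
            {"1e37eeae-7608-474f-8346-3a3add011e41", "registry-server"}: "cpus 0-1",
            {"71bb4a3aecc4ba5b26c4b7318770ce13", "kube-apiserver"}:      "cpus 0-3",
        }
        pruneStale(state, map[string]bool{"71bb4a3aecc4ba5b26c4b7318770ce13": true})
        fmt.Println(len(state)) // 1: only the live pod's entry survives
    }
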
Jan 21 10:09:56 crc kubenswrapper[4684]: E0121 10:09:56.307457 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.307472 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 10:09:56 crc kubenswrapper[4684]: E0121 10:09:56.307482 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.307491 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 10:09:56 crc kubenswrapper[4684]: E0121 10:09:56.307509 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.307518 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 10:09:56 crc kubenswrapper[4684]: E0121 10:09:56.307531 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.307539 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 10:09:56 crc kubenswrapper[4684]: E0121 10:09:56.307552 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.307561 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 10:09:56 crc kubenswrapper[4684]: E0121 10:09:56.307576 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.307584 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 10:09:56 crc kubenswrapper[4684]: E0121 10:09:56.307595 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.307603 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.307682 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034" gracePeriod=15 Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.307723 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
containerName="kube-apiserver-check-endpoints" containerID="cri-o://f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753" gracePeriod=15 Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.307791 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.307809 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.307749 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62" gracePeriod=15 Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.307800 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2" gracePeriod=15 Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.307855 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0" gracePeriod=15 Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.307830 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.308006 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.308046 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.308067 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.312322 4684 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 21 10:09:56 crc kubenswrapper[4684]: E0121 10:09:56.437177 4684 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 21 10:09:56 crc kubenswrapper[4684]: E0121 10:09:56.438018 4684 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 21 10:09:56 crc kubenswrapper[4684]: E0121 10:09:56.438332 4684 controller.go:195] "Failed to update 
lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 21 10:09:56 crc kubenswrapper[4684]: E0121 10:09:56.438673 4684 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 21 10:09:56 crc kubenswrapper[4684]: E0121 10:09:56.438901 4684 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.438967 4684 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 21 10:09:56 crc kubenswrapper[4684]: E0121 10:09:56.439176 4684 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="200ms" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.463150 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.463206 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.463252 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.464524 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.464700 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.464761 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.464786 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.464906 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.566297 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.566846 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.566432 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.566934 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.566903 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.566969 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.567040 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.567117 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.567156 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.567183 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.567211 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.567278 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.567317 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.567352 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.567355 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.567401 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" 
(UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:09:56 crc kubenswrapper[4684]: E0121 10:09:56.640479 4684 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="400ms" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.775319 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.777422 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.778735 4684 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753" exitCode=0 Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.778830 4684 scope.go:117] "RemoveContainer" containerID="f7658d977edfc61cf9a6b1231b1acab5a2fccb6c56a7283549b9fe07e36100e8" Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.778981 4684 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2" exitCode=0 Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.779113 4684 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62" exitCode=0 Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.779141 4684 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0" exitCode=2 Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.782848 4684 generic.go:334] "Generic (PLEG): container finished" podID="f6512b4c-f7d5-420f-8c1c-866c1406dd7f" containerID="8552d149ad4a71ad5794460397a9dbd91364d34a8d323ae7d766ac92849cf107" exitCode=0 Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.782942 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f6512b4c-f7d5-420f-8c1c-866c1406dd7f","Type":"ContainerDied","Data":"8552d149ad4a71ad5794460397a9dbd91364d34a8d323ae7d766ac92849cf107"} Jan 21 10:09:56 crc kubenswrapper[4684]: I0121 10:09:56.783996 4684 status_manager.go:851] "Failed to get status for pod" podUID="f6512b4c-f7d5-420f-8c1c-866c1406dd7f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 21 10:09:57 crc kubenswrapper[4684]: E0121 10:09:57.042171 4684 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="800ms" Jan 21 10:09:57 crc kubenswrapper[4684]: I0121 10:09:57.796789 4684 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 10:09:57 crc kubenswrapper[4684]: E0121 10:09:57.843935 4684 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="1.6s" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.192139 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.193629 4684 status_manager.go:851] "Failed to get status for pod" podUID="f6512b4c-f7d5-420f-8c1c-866c1406dd7f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.396685 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6512b4c-f7d5-420f-8c1c-866c1406dd7f-kubelet-dir\") pod \"f6512b4c-f7d5-420f-8c1c-866c1406dd7f\" (UID: \"f6512b4c-f7d5-420f-8c1c-866c1406dd7f\") " Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.403011 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6512b4c-f7d5-420f-8c1c-866c1406dd7f-kube-api-access\") pod \"f6512b4c-f7d5-420f-8c1c-866c1406dd7f\" (UID: \"f6512b4c-f7d5-420f-8c1c-866c1406dd7f\") " Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.403061 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f6512b4c-f7d5-420f-8c1c-866c1406dd7f-var-lock\") pod \"f6512b4c-f7d5-420f-8c1c-866c1406dd7f\" (UID: \"f6512b4c-f7d5-420f-8c1c-866c1406dd7f\") " Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.398558 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6512b4c-f7d5-420f-8c1c-866c1406dd7f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f6512b4c-f7d5-420f-8c1c-866c1406dd7f" (UID: "f6512b4c-f7d5-420f-8c1c-866c1406dd7f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.403618 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6512b4c-f7d5-420f-8c1c-866c1406dd7f-var-lock" (OuterVolumeSpecName: "var-lock") pod "f6512b4c-f7d5-420f-8c1c-866c1406dd7f" (UID: "f6512b4c-f7d5-420f-8c1c-866c1406dd7f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.414439 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6512b4c-f7d5-420f-8c1c-866c1406dd7f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f6512b4c-f7d5-420f-8c1c-866c1406dd7f" (UID: "f6512b4c-f7d5-420f-8c1c-866c1406dd7f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.504740 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6512b4c-f7d5-420f-8c1c-866c1406dd7f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.504795 4684 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f6512b4c-f7d5-420f-8c1c-866c1406dd7f-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.504805 4684 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6512b4c-f7d5-420f-8c1c-866c1406dd7f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.768266 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.769986 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.770867 4684 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.771727 4684 status_manager.go:851] "Failed to get status for pod" podUID="f6512b4c-f7d5-420f-8c1c-866c1406dd7f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.807062 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.807134 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f6512b4c-f7d5-420f-8c1c-866c1406dd7f","Type":"ContainerDied","Data":"ad17140c86b6d28555e8bdd0ff3fabcd6ae93148ee79556abcc2123520c91b7c"} Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.807222 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad17140c86b6d28555e8bdd0ff3fabcd6ae93148ee79556abcc2123520c91b7c" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.809632 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.809802 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.809858 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.809852 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.809894 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.810027 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.810301 4684 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.810335 4684 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.810354 4684 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.812545 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.812643 4684 status_manager.go:851] "Failed to get status for pod" podUID="f6512b4c-f7d5-420f-8c1c-866c1406dd7f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.813352 4684 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.813609 4684 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034" exitCode=0 Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.813736 4684 scope.go:117] "RemoveContainer" containerID="f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.813795 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.835130 4684 scope.go:117] "RemoveContainer" containerID="eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.836123 4684 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.836341 4684 status_manager.go:851] "Failed to get status for pod" podUID="f6512b4c-f7d5-420f-8c1c-866c1406dd7f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.860575 4684 scope.go:117] "RemoveContainer" containerID="f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.886473 4684 scope.go:117] "RemoveContainer" containerID="bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.908829 4684 scope.go:117] "RemoveContainer" containerID="4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.933389 4684 scope.go:117] "RemoveContainer" containerID="87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.969381 4684 scope.go:117] "RemoveContainer" containerID="f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753" Jan 21 10:09:58 crc kubenswrapper[4684]: E0121 10:09:58.970586 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\": container with ID starting with f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753 not found: ID does not exist" containerID="f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.970728 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753"} err="failed to get container status \"f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\": rpc error: code = NotFound desc = could not find container \"f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753\": container with ID starting with f09457a75de091731f6d77a080138a31e5e75c660404ecb25ee6dd565619e753 not found: ID does not exist" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.970802 4684 scope.go:117] "RemoveContainer" containerID="eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2" Jan 21 10:09:58 crc kubenswrapper[4684]: E0121 10:09:58.971390 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\": container with ID starting with eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2 not found: ID does not exist" 
containerID="eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.971551 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2"} err="failed to get container status \"eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\": rpc error: code = NotFound desc = could not find container \"eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2\": container with ID starting with eeaf55765ed18b929d6837a6a98387640a095c637b31746e98f06aeba207e9e2 not found: ID does not exist" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.971596 4684 scope.go:117] "RemoveContainer" containerID="f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62" Jan 21 10:09:58 crc kubenswrapper[4684]: E0121 10:09:58.972030 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\": container with ID starting with f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62 not found: ID does not exist" containerID="f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.972124 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62"} err="failed to get container status \"f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\": rpc error: code = NotFound desc = could not find container \"f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62\": container with ID starting with f8442f353b99a1bfcf1b984b3fb49216c22467607fc8c165cd65051cc92bcb62 not found: ID does not exist" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.972192 4684 scope.go:117] "RemoveContainer" containerID="bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0" Jan 21 10:09:58 crc kubenswrapper[4684]: E0121 10:09:58.973309 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\": container with ID starting with bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0 not found: ID does not exist" containerID="bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.973351 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0"} err="failed to get container status \"bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\": rpc error: code = NotFound desc = could not find container \"bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0\": container with ID starting with bf04e29446bbe793d792b11a997d51652f33c3197d25a05718119f276ef11df0 not found: ID does not exist" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.973400 4684 scope.go:117] "RemoveContainer" containerID="4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034" Jan 21 10:09:58 crc kubenswrapper[4684]: E0121 10:09:58.974185 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\": container with ID starting with 4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034 not found: ID does not exist" containerID="4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.974255 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034"} err="failed to get container status \"4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\": rpc error: code = NotFound desc = could not find container \"4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034\": container with ID starting with 4430722b0ca258ae2429de6564d05ff56f14d62d8da9d8dc0898f855639dd034 not found: ID does not exist" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.974297 4684 scope.go:117] "RemoveContainer" containerID="87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319" Jan 21 10:09:58 crc kubenswrapper[4684]: E0121 10:09:58.975051 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\": container with ID starting with 87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319 not found: ID does not exist" containerID="87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319" Jan 21 10:09:58 crc kubenswrapper[4684]: I0121 10:09:58.975107 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319"} err="failed to get container status \"87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\": rpc error: code = NotFound desc = could not find container \"87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319\": container with ID starting with 87ef27b509cdcb022f74d475264b49192b264b295fb7a786ca039215ce3cf319 not found: ID does not exist" Jan 21 10:09:59 crc kubenswrapper[4684]: E0121 10:09:59.445210 4684 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="3.2s" Jan 21 10:10:00 crc kubenswrapper[4684]: I0121 10:10:00.526888 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 21 10:10:01 crc kubenswrapper[4684]: E0121 10:10:01.361384 4684 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.9:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:10:01 crc kubenswrapper[4684]: I0121 10:10:01.362465 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:10:01 crc kubenswrapper[4684]: E0121 10:10:01.395140 4684 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.9:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cb7373543ac29 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 10:10:01.394474025 +0000 UTC m=+239.152557012,LastTimestamp:2026-01-21 10:10:01.394474025 +0000 UTC m=+239.152557012,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 10:10:01 crc kubenswrapper[4684]: E0121 10:10:01.504073 4684 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.9:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cb7373543ac29 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 10:10:01.394474025 +0000 UTC m=+239.152557012,LastTimestamp:2026-01-21 10:10:01.394474025 +0000 UTC m=+239.152557012,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 10:10:01 crc kubenswrapper[4684]: I0121 10:10:01.838900 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"46bcdf0ca7f40d24b9f81da9defda455f31d4a05d5954583666f066c964846b2"} Jan 21 10:10:01 crc kubenswrapper[4684]: I0121 10:10:01.839392 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"d92ab229f573f74cf2783bfa4c1f5992b6513e0631e54cede4f0bdf05559c914"} Jan 21 10:10:01 crc kubenswrapper[4684]: I0121 10:10:01.840114 4684 status_manager.go:851] "Failed to get status for pod" podUID="f6512b4c-f7d5-420f-8c1c-866c1406dd7f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" 
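
The lease errors threaded through this window are the node heartbeat failing while the only apiserver is down for the static-pod rollover: after "failed 5 attempts to update lease" the kubelet falls back from updating its cached Lease object to re-fetching (or re-creating) it, and "Failed to ensure lease exists, will retry" backs off from 200ms up to 7s until the new kube-apiserver answers on 6443. A minimal client-go sketch of that update-then-ensure shape, assuming an already-built clientset; this is a simplified stand-in, not the kubelet's actual lease controller:

    package heartbeat

    import (
        "context"
        "time"

        coordinationv1 "k8s.io/api/coordination/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // renewNodeLease bumps spec.renewTime on the node's Lease; if the Lease is
    // missing it recreates it, the "ensure lease" fallback seen in the log.
    func renewNodeLease(ctx context.Context, cs kubernetes.Interface, node string) error {
        leases := cs.CoordinationV1().Leases("kube-node-lease")
        now := metav1.NewMicroTime(time.Now())
        lease, err := leases.Get(ctx, node, metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            lease = &coordinationv1.Lease{
                ObjectMeta: metav1.ObjectMeta{Name: node, Namespace: "kube-node-lease"},
                Spec:       coordinationv1.LeaseSpec{HolderIdentity: &node, RenewTime: &now},
            }
            _, err = leases.Create(ctx, lease, metav1.CreateOptions{})
            return err
        }
        if err != nil {
            return err // e.g. "connect: connection refused" while 6443 is down
        }
        lease.Spec.RenewTime = &now
        _, err = leases.Update(ctx, lease, metav1.UpdateOptions{})
        return err
    }

The same caller-side backoff explains the repeated "Unable to write event (may retry after sleeping)" dumps around it: the kubelet queues status, events, and lease writes independently, so each surfaces its own connection-refused error until the endpoint returns.
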
Jan 21 10:10:01 crc kubenswrapper[4684]: E0121 10:10:01.840782 4684 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.9:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 10:10:02 crc kubenswrapper[4684]: I0121 10:10:02.517883 4684 status_manager.go:851] "Failed to get status for pod" podUID="f6512b4c-f7d5-420f-8c1c-866c1406dd7f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 21 10:10:02 crc kubenswrapper[4684]: E0121 10:10:02.648453 4684 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="6.4s"
Jan 21 10:10:03 crc kubenswrapper[4684]: E0121 10:10:03.573485 4684 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.9:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" volumeName="registry-storage"
Jan 21 10:10:09 crc kubenswrapper[4684]: E0121 10:10:09.051255 4684 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="7s"
Jan 21 10:10:09 crc kubenswrapper[4684]: I0121 10:10:09.513877 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 10:10:09 crc kubenswrapper[4684]: I0121 10:10:09.515533 4684 status_manager.go:851] "Failed to get status for pod" podUID="f6512b4c-f7d5-420f-8c1c-866c1406dd7f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 21 10:10:09 crc kubenswrapper[4684]: I0121 10:10:09.539278 4684 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d8e76ee-8f45-4723-92aa-8b4c9e20dd38"
Jan 21 10:10:09 crc kubenswrapper[4684]: I0121 10:10:09.539319 4684 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d8e76ee-8f45-4723-92aa-8b4c9e20dd38"
Jan 21 10:10:09 crc kubenswrapper[4684]: E0121 10:10:09.539923 4684 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 10:10:09 crc kubenswrapper[4684]: I0121 10:10:09.540862 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 10:10:09 crc kubenswrapper[4684]: I0121 10:10:09.893863 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d9c7b59627d57e392e6116c55ab567400735044a64f4f68decdbab811505ee57"}
Jan 21 10:10:09 crc kubenswrapper[4684]: I0121 10:10:09.894407 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"12be862d1d5c190fcd7708dc39b090e29ef5a2c1e420bce7710a6c55b327d2be"}
Jan 21 10:10:09 crc kubenswrapper[4684]: I0121 10:10:09.894821 4684 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d8e76ee-8f45-4723-92aa-8b4c9e20dd38"
Jan 21 10:10:09 crc kubenswrapper[4684]: I0121 10:10:09.894846 4684 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d8e76ee-8f45-4723-92aa-8b4c9e20dd38"
Jan 21 10:10:09 crc kubenswrapper[4684]: I0121 10:10:09.895588 4684 status_manager.go:851] "Failed to get status for pod" podUID="f6512b4c-f7d5-420f-8c1c-866c1406dd7f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 21 10:10:09 crc kubenswrapper[4684]: E0121 10:10:09.895656 4684 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 10:10:10 crc kubenswrapper[4684]: E0121 10:10:10.714925 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:10:10Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:10:10Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:10:10Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:10:10Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 21 10:10:10 crc kubenswrapper[4684]: E0121 10:10:10.715972 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 21 10:10:10 crc kubenswrapper[4684]: E0121 10:10:10.716935 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 21 10:10:10 crc kubenswrapper[4684]: E0121 10:10:10.717790 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 21 10:10:10 crc kubenswrapper[4684]: E0121 10:10:10.718446 4684 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 21 10:10:10 crc kubenswrapper[4684]: E0121 10:10:10.718584 4684 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 21 10:10:10 crc kubenswrapper[4684]: I0121 10:10:10.905798 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 21 10:10:10 crc kubenswrapper[4684]: I0121 10:10:10.905867 4684 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e" exitCode=1
Jan 21 10:10:10 crc kubenswrapper[4684]: I0121 10:10:10.905940 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e"}
Jan 21 10:10:10 crc kubenswrapper[4684]: I0121 10:10:10.906414 4684 scope.go:117] "RemoveContainer" containerID="c43e227a3f376712291ab1037d4365800bd28f722fb6254243be497b7081072e"
Jan 21 10:10:10 crc kubenswrapper[4684]: I0121 10:10:10.907297 4684 status_manager.go:851] "Failed to get status for pod" podUID="f6512b4c-f7d5-420f-8c1c-866c1406dd7f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 21 10:10:10 crc kubenswrapper[4684]: I0121 10:10:10.907982 4684 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 21 10:10:10 crc kubenswrapper[4684]: I0121 10:10:10.910024 4684 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="d9c7b59627d57e392e6116c55ab567400735044a64f4f68decdbab811505ee57" exitCode=0
Jan 21 10:10:10 crc kubenswrapper[4684]: I0121 10:10:10.910064 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"d9c7b59627d57e392e6116c55ab567400735044a64f4f68decdbab811505ee57"}
Jan 21 10:10:10 crc kubenswrapper[4684]: I0121 10:10:10.910493 4684 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d8e76ee-8f45-4723-92aa-8b4c9e20dd38"
Jan 21 10:10:10 crc kubenswrapper[4684]: I0121 10:10:10.910515 4684 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d8e76ee-8f45-4723-92aa-8b4c9e20dd38"
Jan 21 10:10:10 crc kubenswrapper[4684]: I0121 10:10:10.910824 4684 status_manager.go:851] "Failed to get status for pod" podUID="f6512b4c-f7d5-420f-8c1c-866c1406dd7f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 21 10:10:10 crc kubenswrapper[4684]: E0121 10:10:10.911167 4684 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 10:10:10 crc kubenswrapper[4684]: I0121 10:10:10.911427 4684 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 21 10:10:11 crc kubenswrapper[4684]: I0121 10:10:11.921942 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 21 10:10:11 crc kubenswrapper[4684]: I0121 10:10:11.922412 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f8619feb6a598b5ae4a59d4735434b64b5180a8643eee141822c782e62b5b007"}
Jan 21 10:10:11 crc kubenswrapper[4684]: I0121 10:10:11.926300 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"45fb6adbc3db54cffe7849efa880553e65e00b74a835772a228d9afd74020e83"}
Jan 21 10:10:11 crc kubenswrapper[4684]: I0121 10:10:11.926373 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9315a0cae3965525861fc5d692aa93cabe3258760719e73406004c27fe488da7"}
Jan 21 10:10:11 crc kubenswrapper[4684]: I0121 10:10:11.926390 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"23801ddecfdc8456fe734c952b6945b0c4fb87c86dff3881ceb30b154b53df8b"}
Jan 21 10:10:12 crc kubenswrapper[4684]: I0121 10:10:12.225437 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 10:10:12 crc kubenswrapper[4684]: I0121 10:10:12.947477 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"33e876fda51e0e5df8e4bacc9d974b8f46764202b0e9e7488e96bee9031b1887"}
Jan 21 10:10:12 crc kubenswrapper[4684]: I0121 10:10:12.948725 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ce20f24795830ca13ced9f87f438eb1eb61d8f9f67f652afdf2e914cc65d91c0"}
Jan 21 10:10:12 crc kubenswrapper[4684]: I0121 10:10:12.948536 4684 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d8e76ee-8f45-4723-92aa-8b4c9e20dd38"
Jan 21 10:10:12 crc kubenswrapper[4684]: I0121 10:10:12.948952 4684 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d8e76ee-8f45-4723-92aa-8b4c9e20dd38"
Jan 21 10:10:13 crc kubenswrapper[4684]: I0121 10:10:13.096459 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 10:10:13 crc kubenswrapper[4684]: I0121 10:10:13.102081 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 10:10:14 crc kubenswrapper[4684]: I0121 10:10:14.541963 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 10:10:14 crc kubenswrapper[4684]: I0121 10:10:14.542568 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 10:10:14 crc kubenswrapper[4684]: I0121 10:10:14.554486 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.459614 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-plndh" podUID="af694d93-1240-4a87-a2fe-153bb2401143" containerName="oauth-openshift" containerID="cri-o://6d71a287adea5fcd561d5bcd32b7d5d0ceb279d070f492440382163f4d4708f4" gracePeriod=15
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.845981 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-plndh"
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.882159 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/af694d93-1240-4a87-a2fe-153bb2401143-audit-policies\") pod \"af694d93-1240-4a87-a2fe-153bb2401143\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") "
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.882234 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-trusted-ca-bundle\") pod \"af694d93-1240-4a87-a2fe-153bb2401143\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") "
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.882278 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-cliconfig\") pod \"af694d93-1240-4a87-a2fe-153bb2401143\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") "
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.882403 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/af694d93-1240-4a87-a2fe-153bb2401143-audit-dir\") pod \"af694d93-1240-4a87-a2fe-153bb2401143\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") "
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.882443 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-ocp-branding-template\") pod \"af694d93-1240-4a87-a2fe-153bb2401143\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") "
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.882498 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-user-idp-0-file-data\") pod \"af694d93-1240-4a87-a2fe-153bb2401143\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") "
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.882536 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-session\") pod \"af694d93-1240-4a87-a2fe-153bb2401143\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") "
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.882534 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af694d93-1240-4a87-a2fe-153bb2401143-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "af694d93-1240-4a87-a2fe-153bb2401143" (UID: "af694d93-1240-4a87-a2fe-153bb2401143"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.882587 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-user-template-login\") pod \"af694d93-1240-4a87-a2fe-153bb2401143\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") "
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.882617 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-serving-cert\") pod \"af694d93-1240-4a87-a2fe-153bb2401143\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") "
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.882656 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-user-template-provider-selection\") pod \"af694d93-1240-4a87-a2fe-153bb2401143\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") "
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.882709 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-user-template-error\") pod \"af694d93-1240-4a87-a2fe-153bb2401143\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") "
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.882758 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-router-certs\") pod \"af694d93-1240-4a87-a2fe-153bb2401143\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") "
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.882809 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-service-ca\") pod \"af694d93-1240-4a87-a2fe-153bb2401143\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") "
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.882859 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2g9f\" (UniqueName: \"kubernetes.io/projected/af694d93-1240-4a87-a2fe-153bb2401143-kube-api-access-q2g9f\") pod \"af694d93-1240-4a87-a2fe-153bb2401143\" (UID: \"af694d93-1240-4a87-a2fe-153bb2401143\") "
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.883178 4684 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/af694d93-1240-4a87-a2fe-153bb2401143-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.883461 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "af694d93-1240-4a87-a2fe-153bb2401143" (UID: "af694d93-1240-4a87-a2fe-153bb2401143"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.884041 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af694d93-1240-4a87-a2fe-153bb2401143-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "af694d93-1240-4a87-a2fe-153bb2401143" (UID: "af694d93-1240-4a87-a2fe-153bb2401143"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.884195 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "af694d93-1240-4a87-a2fe-153bb2401143" (UID: "af694d93-1240-4a87-a2fe-153bb2401143"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.884730 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "af694d93-1240-4a87-a2fe-153bb2401143" (UID: "af694d93-1240-4a87-a2fe-153bb2401143"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.890460 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "af694d93-1240-4a87-a2fe-153bb2401143" (UID: "af694d93-1240-4a87-a2fe-153bb2401143"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.890962 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af694d93-1240-4a87-a2fe-153bb2401143-kube-api-access-q2g9f" (OuterVolumeSpecName: "kube-api-access-q2g9f") pod "af694d93-1240-4a87-a2fe-153bb2401143" (UID: "af694d93-1240-4a87-a2fe-153bb2401143"). InnerVolumeSpecName "kube-api-access-q2g9f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.891165 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "af694d93-1240-4a87-a2fe-153bb2401143" (UID: "af694d93-1240-4a87-a2fe-153bb2401143"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.891262 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "af694d93-1240-4a87-a2fe-153bb2401143" (UID: "af694d93-1240-4a87-a2fe-153bb2401143"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.891860 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "af694d93-1240-4a87-a2fe-153bb2401143" (UID: "af694d93-1240-4a87-a2fe-153bb2401143"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.892244 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "af694d93-1240-4a87-a2fe-153bb2401143" (UID: "af694d93-1240-4a87-a2fe-153bb2401143"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.892484 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "af694d93-1240-4a87-a2fe-153bb2401143" (UID: "af694d93-1240-4a87-a2fe-153bb2401143"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.892770 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "af694d93-1240-4a87-a2fe-153bb2401143" (UID: "af694d93-1240-4a87-a2fe-153bb2401143"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.894116 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "af694d93-1240-4a87-a2fe-153bb2401143" (UID: "af694d93-1240-4a87-a2fe-153bb2401143"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.969992 4684 generic.go:334] "Generic (PLEG): container finished" podID="af694d93-1240-4a87-a2fe-153bb2401143" containerID="6d71a287adea5fcd561d5bcd32b7d5d0ceb279d070f492440382163f4d4708f4" exitCode=0
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.970294 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-plndh"
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.970190 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-plndh" event={"ID":"af694d93-1240-4a87-a2fe-153bb2401143","Type":"ContainerDied","Data":"6d71a287adea5fcd561d5bcd32b7d5d0ceb279d070f492440382163f4d4708f4"}
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.970582 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-plndh" event={"ID":"af694d93-1240-4a87-a2fe-153bb2401143","Type":"ContainerDied","Data":"b0138d88a2d62021f9c030d1d37ffd881dc020407df9b147297ef764513a8908"}
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.970663 4684 scope.go:117] "RemoveContainer" containerID="6d71a287adea5fcd561d5bcd32b7d5d0ceb279d070f492440382163f4d4708f4"
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.984831 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.985006 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.985112 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.985197 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.985289 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.985428 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.985527 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.985629 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.985743 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.985835 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2g9f\" (UniqueName: \"kubernetes.io/projected/af694d93-1240-4a87-a2fe-153bb2401143-kube-api-access-q2g9f\") on node \"crc\" DevicePath \"\""
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.986264 4684 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/af694d93-1240-4a87-a2fe-153bb2401143-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.986381 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 10:10:15 crc kubenswrapper[4684]: I0121 10:10:15.986486 4684 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/af694d93-1240-4a87-a2fe-153bb2401143-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 21 10:10:16 crc kubenswrapper[4684]: I0121 10:10:16.031606 4684 scope.go:117] "RemoveContainer" containerID="6d71a287adea5fcd561d5bcd32b7d5d0ceb279d070f492440382163f4d4708f4"
Jan 21 10:10:16 crc kubenswrapper[4684]: E0121 10:10:16.035536 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d71a287adea5fcd561d5bcd32b7d5d0ceb279d070f492440382163f4d4708f4\": container with ID starting with 6d71a287adea5fcd561d5bcd32b7d5d0ceb279d070f492440382163f4d4708f4 not found: ID does not exist" containerID="6d71a287adea5fcd561d5bcd32b7d5d0ceb279d070f492440382163f4d4708f4"
Jan 21 10:10:16 crc kubenswrapper[4684]: I0121 10:10:16.035603 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d71a287adea5fcd561d5bcd32b7d5d0ceb279d070f492440382163f4d4708f4"} err="failed to get container status \"6d71a287adea5fcd561d5bcd32b7d5d0ceb279d070f492440382163f4d4708f4\": rpc error: code = NotFound desc = could not find container \"6d71a287adea5fcd561d5bcd32b7d5d0ceb279d070f492440382163f4d4708f4\": container with ID starting with 6d71a287adea5fcd561d5bcd32b7d5d0ceb279d070f492440382163f4d4708f4 not found: ID does not exist"
Jan 21 10:10:17 crc kubenswrapper[4684]: I0121 10:10:17.960995 4684 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 10:10:18 crc kubenswrapper[4684]: I0121 10:10:18.994593 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 10:10:18 crc kubenswrapper[4684]: I0121 10:10:18.994725 4684 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d8e76ee-8f45-4723-92aa-8b4c9e20dd38"
Jan 21 10:10:18 crc kubenswrapper[4684]: I0121 10:10:18.994761 4684 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d8e76ee-8f45-4723-92aa-8b4c9e20dd38"
Jan 21 10:10:19 crc kubenswrapper[4684]: I0121 10:10:19.001684 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 10:10:19 crc kubenswrapper[4684]: I0121 10:10:19.004702 4684 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="7cdecea4-d610-4f2f-8675-327b877a6947"
Jan 21 10:10:19 crc kubenswrapper[4684]: I0121 10:10:19.999470 4684 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d8e76ee-8f45-4723-92aa-8b4c9e20dd38"
Jan 21 10:10:19 crc kubenswrapper[4684]: I0121 10:10:19.999955 4684 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d8e76ee-8f45-4723-92aa-8b4c9e20dd38"
Jan 21 10:10:21 crc kubenswrapper[4684]: I0121 10:10:21.005907 4684 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d8e76ee-8f45-4723-92aa-8b4c9e20dd38"
Jan 21 10:10:21 crc kubenswrapper[4684]: I0121 10:10:21.005957 4684 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d8e76ee-8f45-4723-92aa-8b4c9e20dd38"
Jan 21 10:10:22 crc kubenswrapper[4684]: I0121 10:10:22.230810 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 10:10:22 crc kubenswrapper[4684]: I0121 10:10:22.533461 4684 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="7cdecea4-d610-4f2f-8675-327b877a6947"
Jan 21 10:10:27 crc kubenswrapper[4684]: I0121 10:10:27.643817 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 21 10:10:28 crc kubenswrapper[4684]: I0121 10:10:28.031014 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 21 10:10:28 crc kubenswrapper[4684]: I0121 10:10:28.192512 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 21 10:10:28 crc kubenswrapper[4684]: I0121 10:10:28.374266 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 21 10:10:28 crc kubenswrapper[4684]: I0121 10:10:28.409997 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 21 10:10:28 crc kubenswrapper[4684]: I0121 10:10:28.476833 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 21 10:10:28 crc kubenswrapper[4684]: I0121 10:10:28.745293 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 21 10:10:29 crc kubenswrapper[4684]: I0121 10:10:29.052408 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 21 10:10:29 crc kubenswrapper[4684]: I0121 10:10:29.122411 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 21 10:10:29 crc kubenswrapper[4684]: I0121 10:10:29.373914 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 21 10:10:29 crc kubenswrapper[4684]: I0121 10:10:29.391283 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 21 10:10:29 crc kubenswrapper[4684]: I0121 10:10:29.420829 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 21 10:10:29 crc kubenswrapper[4684]: I0121 10:10:29.484444 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 21 10:10:29 crc kubenswrapper[4684]: I0121 10:10:29.622329 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 21 10:10:29 crc kubenswrapper[4684]: I0121 10:10:29.649104 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 21 10:10:30 crc kubenswrapper[4684]: I0121 10:10:30.297599 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 21 10:10:30 crc kubenswrapper[4684]: I0121 10:10:30.336740 4684 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 21 10:10:30 crc kubenswrapper[4684]: I0121 10:10:30.349015 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 21 10:10:30 crc kubenswrapper[4684]: I0121 10:10:30.395116 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 21 10:10:30 crc kubenswrapper[4684]: I0121 10:10:30.402880 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 21 10:10:30 crc kubenswrapper[4684]: I0121 10:10:30.609998 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 21 10:10:30 crc kubenswrapper[4684]: I0121 10:10:30.682533 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 21 10:10:30 crc kubenswrapper[4684]: I0121 10:10:30.698971 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 21 10:10:30 crc kubenswrapper[4684]: I0121 10:10:30.718293 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 21 10:10:30 crc kubenswrapper[4684]: I0121 10:10:30.906463 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 21 10:10:30 crc kubenswrapper[4684]: I0121 10:10:30.921246 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 21 10:10:30 crc kubenswrapper[4684]: I0121 10:10:30.969193 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 21 10:10:30 crc kubenswrapper[4684]: I0121 10:10:30.998151 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 21 10:10:31 crc kubenswrapper[4684]: I0121 10:10:31.013787 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 21 10:10:31 crc kubenswrapper[4684]: I0121 10:10:31.089213 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 21 10:10:31 crc kubenswrapper[4684]: I0121 10:10:31.157461 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 21 10:10:31 crc kubenswrapper[4684]: I0121 10:10:31.245918 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 21 10:10:31 crc kubenswrapper[4684]: I0121 10:10:31.316809 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 21 10:10:31 crc kubenswrapper[4684]: I0121 10:10:31.399708 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 21 10:10:31 crc kubenswrapper[4684]: I0121 10:10:31.497860 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 21 10:10:31 crc kubenswrapper[4684]: I0121 10:10:31.677437 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 21 10:10:31 crc kubenswrapper[4684]: I0121 10:10:31.738464 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 21 10:10:31 crc kubenswrapper[4684]: I0121 10:10:31.810024 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 21 10:10:31 crc kubenswrapper[4684]: I0121 10:10:31.926392 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 21 10:10:31 crc kubenswrapper[4684]: I0121 10:10:31.949329 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 21 10:10:32 crc kubenswrapper[4684]: I0121 10:10:32.039271 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 21 10:10:32 crc kubenswrapper[4684]: I0121 10:10:32.150120 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 21 10:10:32 crc kubenswrapper[4684]: I0121 10:10:32.177481 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 21 10:10:32 crc kubenswrapper[4684]: I0121 10:10:32.206021 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 21 10:10:32 crc kubenswrapper[4684]: I0121 10:10:32.210436 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 21 10:10:32 crc kubenswrapper[4684]: I0121 10:10:32.377542 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 21 10:10:32 crc kubenswrapper[4684]: I0121 10:10:32.495727 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 21 10:10:32 crc kubenswrapper[4684]: I0121 10:10:32.496082 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 21 10:10:32 crc kubenswrapper[4684]: I0121 10:10:32.497816 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 21 10:10:32 crc kubenswrapper[4684]: I0121 10:10:32.517552 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 21 10:10:32 crc kubenswrapper[4684]: I0121 10:10:32.547869 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 21 10:10:32 crc kubenswrapper[4684]: I0121 10:10:32.671208 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 21 10:10:32 crc kubenswrapper[4684]: I0121 10:10:32.760383 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 21 10:10:32 crc kubenswrapper[4684]: I0121 10:10:32.797900 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 21 10:10:32 crc kubenswrapper[4684]: I0121 10:10:32.799769 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 21 10:10:32 crc kubenswrapper[4684]: I0121 10:10:32.866038 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 21 10:10:32 crc kubenswrapper[4684]: I0121 10:10:32.877861 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 21 10:10:32 crc kubenswrapper[4684]: I0121 10:10:32.878813 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 21 10:10:32 crc kubenswrapper[4684]: I0121 10:10:32.879675 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 21 10:10:32 crc kubenswrapper[4684]: I0121 10:10:32.992254 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 21 10:10:33 crc kubenswrapper[4684]: I0121 10:10:33.042857 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 21 10:10:33 crc kubenswrapper[4684]: I0121 10:10:33.052729 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 21 10:10:33 crc kubenswrapper[4684]: I0121 10:10:33.157756 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 21 10:10:33 crc kubenswrapper[4684]: I0121 10:10:33.286425 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 21 10:10:33 crc kubenswrapper[4684]: I0121 10:10:33.455406 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 21 10:10:33 crc kubenswrapper[4684]: I0121 10:10:33.472134 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 21 10:10:33 crc kubenswrapper[4684]: I0121 10:10:33.484312 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 21 10:10:33 crc kubenswrapper[4684]: I0121 10:10:33.521822 4684 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 21 10:10:33 crc kubenswrapper[4684]: I0121 10:10:33.567627 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 21 10:10:33 crc kubenswrapper[4684]: I0121 10:10:33.574118 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 21 10:10:33 crc kubenswrapper[4684]: I0121 10:10:33.637569 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 21 10:10:33 crc kubenswrapper[4684]: I0121 10:10:33.707531 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 21 10:10:33 crc kubenswrapper[4684]: I0121 10:10:33.710233 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 21 10:10:33 crc kubenswrapper[4684]: I0121 10:10:33.751524 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 21 10:10:33 crc kubenswrapper[4684]: I0121 10:10:33.891498 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 21 10:10:34 crc kubenswrapper[4684]: I0121 10:10:34.053391 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 21 10:10:34 crc kubenswrapper[4684]: I0121 10:10:34.105306 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 21 10:10:34 crc kubenswrapper[4684]: I0121 10:10:34.170765 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 21 10:10:34 crc kubenswrapper[4684]: I0121 10:10:34.180740 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 21 10:10:34 crc kubenswrapper[4684]: I0121 10:10:34.211944 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 21 10:10:34 crc kubenswrapper[4684]: I0121 10:10:34.218907 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 21 10:10:34 crc kubenswrapper[4684]: I0121 10:10:34.258643 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 21 10:10:34 crc kubenswrapper[4684]: I0121 10:10:34.304805 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 21 10:10:34 crc kubenswrapper[4684]: I0121 10:10:34.440766 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 21 10:10:34 crc kubenswrapper[4684]: I0121 10:10:34.602769 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 21 10:10:34 crc kubenswrapper[4684]: I0121 10:10:34.673202 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 21 10:10:34 crc kubenswrapper[4684]: I0121 10:10:34.686938 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 21 10:10:34 crc kubenswrapper[4684]: I0121 10:10:34.695469 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 21 10:10:34 crc kubenswrapper[4684]: I0121 10:10:34.762231 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 21 10:10:34 crc kubenswrapper[4684]: I0121 10:10:34.763644 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 21 10:10:34 crc kubenswrapper[4684]: I0121 10:10:34.830928 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 21 10:10:34 crc kubenswrapper[4684]: I0121 10:10:34.874524 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 21 10:10:34 crc kubenswrapper[4684]: I0121 10:10:34.892009 4684 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 21 10:10:34 crc kubenswrapper[4684]: I0121 10:10:34.982209 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 21 10:10:35 crc kubenswrapper[4684]: I0121 10:10:35.188943 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 21 10:10:35 crc kubenswrapper[4684]: I0121 10:10:35.202774 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 21 10:10:35 crc kubenswrapper[4684]: I0121 10:10:35.207308 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 21 10:10:35 crc kubenswrapper[4684]: I0121 10:10:35.274722 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 21 10:10:35 crc kubenswrapper[4684]: I0121 10:10:35.303848 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 21 10:10:35 crc kubenswrapper[4684]: I0121 10:10:35.396530 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 21 10:10:35 crc kubenswrapper[4684]: I0121 10:10:35.399293 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 21 10:10:35 crc kubenswrapper[4684]: I0121 10:10:35.409277 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 21 10:10:35 crc kubenswrapper[4684]: I0121 10:10:35.422149 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 21 10:10:35 crc kubenswrapper[4684]: I0121 10:10:35.489765 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 21 10:10:35 crc kubenswrapper[4684]: I0121 10:10:35.604288 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 21 10:10:35 crc kubenswrapper[4684]: I0121 10:10:35.730985 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 21 10:10:35 crc kubenswrapper[4684]: I0121 10:10:35.731527 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 21 10:10:35 crc kubenswrapper[4684]: I0121 10:10:35.752437 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 21 10:10:35 crc kubenswrapper[4684]: I0121 10:10:35.849894 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 21 10:10:35 crc kubenswrapper[4684]: I0121 10:10:35.925509 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 21 10:10:35 crc kubenswrapper[4684]: I0121 10:10:35.944797 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 21 10:10:35 crc kubenswrapper[4684]: I0121 10:10:35.964258 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 21 10:10:35 crc kubenswrapper[4684]: I0121 10:10:35.967950 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 21 10:10:36 crc kubenswrapper[4684]: I0121 10:10:36.018874 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 21 10:10:36 crc kubenswrapper[4684]: I0121 10:10:36.042461 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 21 10:10:36 crc kubenswrapper[4684]: I0121 10:10:36.124800 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 21 10:10:36 crc kubenswrapper[4684]: I0121 10:10:36.125871 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 21 10:10:36 crc kubenswrapper[4684]: I0121 10:10:36.198163 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 21 10:10:36 crc kubenswrapper[4684]: I0121 10:10:36.231200 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 21 10:10:36 crc kubenswrapper[4684]: I0121 10:10:36.431958 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 21 10:10:36 crc kubenswrapper[4684]: I0121 10:10:36.528633 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 21 10:10:36 crc kubenswrapper[4684]: I0121 10:10:36.579622 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 21 10:10:36 crc kubenswrapper[4684]: I0121 10:10:36.670245 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 21 10:10:36 crc kubenswrapper[4684]: I0121 10:10:36.675809 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 21 10:10:36 crc kubenswrapper[4684]: I0121 
10:10:36.729721 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 21 10:10:36 crc kubenswrapper[4684]: I0121 10:10:36.752893 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 10:10:36 crc kubenswrapper[4684]: I0121 10:10:36.753064 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 10:10:36 crc kubenswrapper[4684]: I0121 10:10:36.784818 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 21 10:10:36 crc kubenswrapper[4684]: I0121 10:10:36.830250 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 10:10:36 crc kubenswrapper[4684]: I0121 10:10:36.838075 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 10:10:36 crc kubenswrapper[4684]: I0121 10:10:36.844466 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 21 10:10:36 crc kubenswrapper[4684]: I0121 10:10:36.867511 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 10:10:36 crc kubenswrapper[4684]: I0121 10:10:36.957774 4684 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 21 10:10:36 crc kubenswrapper[4684]: I0121 10:10:36.994493 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 10:10:37 crc kubenswrapper[4684]: I0121 10:10:37.004049 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 10:10:37 crc kubenswrapper[4684]: I0121 10:10:37.014423 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 10:10:37 crc kubenswrapper[4684]: I0121 10:10:37.072330 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 10:10:37 crc kubenswrapper[4684]: I0121 10:10:37.259653 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 10:10:37 crc kubenswrapper[4684]: I0121 10:10:37.278085 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 21 10:10:37 crc kubenswrapper[4684]: I0121 10:10:37.412939 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 21 10:10:37 crc kubenswrapper[4684]: I0121 10:10:37.413781 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 21 10:10:37 crc kubenswrapper[4684]: I0121 10:10:37.433294 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 10:10:37 crc kubenswrapper[4684]: I0121 10:10:37.472109 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 10:10:37 crc kubenswrapper[4684]: I0121 10:10:37.474929 4684 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 21 10:10:37 crc kubenswrapper[4684]: I0121 10:10:37.490221 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 10:10:37 crc kubenswrapper[4684]: I0121 10:10:37.506914 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 21 10:10:37 crc kubenswrapper[4684]: I0121 10:10:37.589766 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 10:10:37 crc kubenswrapper[4684]: I0121 10:10:37.609238 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 21 10:10:37 crc kubenswrapper[4684]: I0121 10:10:37.619823 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 21 10:10:37 crc kubenswrapper[4684]: I0121 10:10:37.637083 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 10:10:37 crc kubenswrapper[4684]: I0121 10:10:37.686199 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 10:10:37 crc kubenswrapper[4684]: I0121 10:10:37.744853 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 10:10:37 crc kubenswrapper[4684]: I0121 10:10:37.894607 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 21 10:10:37 crc kubenswrapper[4684]: I0121 10:10:37.896598 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 10:10:37 crc kubenswrapper[4684]: I0121 10:10:37.931587 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 10:10:37 crc kubenswrapper[4684]: I0121 10:10:37.978784 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 10:10:37 crc kubenswrapper[4684]: I0121 10:10:37.982480 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 10:10:38 crc kubenswrapper[4684]: I0121 10:10:38.000861 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 21 10:10:38 crc kubenswrapper[4684]: I0121 10:10:38.258983 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 21 10:10:38 crc kubenswrapper[4684]: I0121 10:10:38.293992 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 21 10:10:38 crc kubenswrapper[4684]: I0121 10:10:38.332803 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 21 10:10:38 crc kubenswrapper[4684]: I0121 10:10:38.388789 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 10:10:38 crc kubenswrapper[4684]: 
I0121 10:10:38.395289 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 10:10:38 crc kubenswrapper[4684]: I0121 10:10:38.404901 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 10:10:38 crc kubenswrapper[4684]: I0121 10:10:38.427722 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 10:10:38 crc kubenswrapper[4684]: I0121 10:10:38.433327 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 10:10:38 crc kubenswrapper[4684]: I0121 10:10:38.489444 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 21 10:10:38 crc kubenswrapper[4684]: I0121 10:10:38.561527 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 10:10:38 crc kubenswrapper[4684]: I0121 10:10:38.639303 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 21 10:10:38 crc kubenswrapper[4684]: I0121 10:10:38.705614 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 10:10:38 crc kubenswrapper[4684]: I0121 10:10:38.718974 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 10:10:38 crc kubenswrapper[4684]: I0121 10:10:38.737706 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 10:10:38 crc kubenswrapper[4684]: I0121 10:10:38.737924 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 21 10:10:38 crc kubenswrapper[4684]: I0121 10:10:38.752034 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 10:10:38 crc kubenswrapper[4684]: I0121 10:10:38.895171 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 10:10:38 crc kubenswrapper[4684]: I0121 10:10:38.962708 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 10:10:38 crc kubenswrapper[4684]: I0121 10:10:38.983151 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 21 10:10:38 crc kubenswrapper[4684]: I0121 10:10:38.983200 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.139453 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.178324 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.180727 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 
10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.245145 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.251478 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.331519 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.400446 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.407940 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.507121 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.650848 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.671889 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.787948 4684 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.796036 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-plndh","openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.796136 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-96d6999f9-jk85f","openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 10:10:39 crc kubenswrapper[4684]: E0121 10:10:39.796432 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af694d93-1240-4a87-a2fe-153bb2401143" containerName="oauth-openshift" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.796453 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="af694d93-1240-4a87-a2fe-153bb2401143" containerName="oauth-openshift" Jan 21 10:10:39 crc kubenswrapper[4684]: E0121 10:10:39.796483 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6512b4c-f7d5-420f-8c1c-866c1406dd7f" containerName="installer" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.796497 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6512b4c-f7d5-420f-8c1c-866c1406dd7f" containerName="installer" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.796687 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="af694d93-1240-4a87-a2fe-153bb2401143" containerName="oauth-openshift" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.796722 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6512b4c-f7d5-420f-8c1c-866c1406dd7f" containerName="installer" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.796785 4684 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d8e76ee-8f45-4723-92aa-8b4c9e20dd38" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.796818 4684 mirror_client.go:130] "Deleting a mirror 
pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3d8e76ee-8f45-4723-92aa-8b4c9e20dd38" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.797536 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.802126 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.802335 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.802432 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.802481 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.802932 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.802966 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.803017 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.803454 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.803535 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.803456 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.803582 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.806406 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.806517 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.808936 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.812481 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.820531 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.820855 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 
10:10:39.821586 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tnzr\" (UniqueName: \"kubernetes.io/projected/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-kube-api-access-8tnzr\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.821668 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-v4-0-config-system-service-ca\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.821724 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.821778 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-v4-0-config-system-session\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.821829 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-v4-0-config-system-router-certs\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.821863 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-audit-dir\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.821913 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-v4-0-config-system-serving-cert\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.821950 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-audit-policies\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 
10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.821984 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.822023 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.822058 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-v4-0-config-user-template-error\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.822131 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.822184 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-v4-0-config-user-template-login\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.822223 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-v4-0-config-system-cliconfig\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.828896 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.832569 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.832529272 podStartE2EDuration="22.832529272s" podCreationTimestamp="2026-01-21 10:10:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:10:39.826315084 +0000 UTC m=+277.584398101" watchObservedRunningTime="2026-01-21 10:10:39.832529272 +0000 UTC 
m=+277.590612279" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.866154 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.893191 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.920755 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.923626 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-v4-0-config-user-template-error\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.923669 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.923700 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-v4-0-config-user-template-login\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.923721 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-v4-0-config-system-cliconfig\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.923743 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tnzr\" (UniqueName: \"kubernetes.io/projected/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-kube-api-access-8tnzr\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.923775 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-v4-0-config-system-service-ca\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.923802 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.923828 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-v4-0-config-system-session\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.923858 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-v4-0-config-system-router-certs\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.923876 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-audit-dir\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.923901 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-v4-0-config-system-serving-cert\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.923916 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.923934 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-audit-policies\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.923951 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.925385 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-v4-0-config-system-service-ca\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " 
pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.925534 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-v4-0-config-system-cliconfig\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.925706 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.926637 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-audit-policies\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.926803 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-audit-dir\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.930459 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-v4-0-config-user-template-error\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.931293 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.931447 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-v4-0-config-system-serving-cert\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.931607 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-v4-0-config-system-router-certs\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.931747 
4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.932168 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.946091 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-v4-0-config-system-session\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.946762 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-v4-0-config-user-template-login\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:39 crc kubenswrapper[4684]: I0121 10:10:39.950844 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tnzr\" (UniqueName: \"kubernetes.io/projected/b383da37-e22b-4b3a-9e8b-f0bba8a29b60-kube-api-access-8tnzr\") pod \"oauth-openshift-96d6999f9-jk85f\" (UID: \"b383da37-e22b-4b3a-9e8b-f0bba8a29b60\") " pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:40 crc kubenswrapper[4684]: I0121 10:10:40.010592 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 10:10:40 crc kubenswrapper[4684]: I0121 10:10:40.053659 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 21 10:10:40 crc kubenswrapper[4684]: I0121 10:10:40.064288 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 21 10:10:40 crc kubenswrapper[4684]: I0121 10:10:40.133346 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:40 crc kubenswrapper[4684]: I0121 10:10:40.221290 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 10:10:40 crc kubenswrapper[4684]: I0121 10:10:40.365329 4684 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 10:10:40 crc kubenswrapper[4684]: I0121 10:10:40.365589 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://46bcdf0ca7f40d24b9f81da9defda455f31d4a05d5954583666f066c964846b2" gracePeriod=5 Jan 21 10:10:40 crc kubenswrapper[4684]: I0121 10:10:40.383063 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 21 10:10:40 crc kubenswrapper[4684]: I0121 10:10:40.469386 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 10:10:40 crc kubenswrapper[4684]: I0121 10:10:40.477491 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 21 10:10:40 crc kubenswrapper[4684]: I0121 10:10:40.481177 4684 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 21 10:10:40 crc kubenswrapper[4684]: I0121 10:10:40.489024 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 21 10:10:40 crc kubenswrapper[4684]: I0121 10:10:40.526827 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af694d93-1240-4a87-a2fe-153bb2401143" path="/var/lib/kubelet/pods/af694d93-1240-4a87-a2fe-153bb2401143/volumes" Jan 21 10:10:40 crc kubenswrapper[4684]: I0121 10:10:40.613050 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-96d6999f9-jk85f"] Jan 21 10:10:40 crc kubenswrapper[4684]: I0121 10:10:40.680111 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 10:10:40 crc kubenswrapper[4684]: I0121 10:10:40.716720 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 21 10:10:40 crc kubenswrapper[4684]: I0121 10:10:40.768240 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 10:10:40 crc kubenswrapper[4684]: I0121 10:10:40.919157 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 21 10:10:40 crc kubenswrapper[4684]: I0121 10:10:40.964631 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 10:10:40 crc kubenswrapper[4684]: I0121 10:10:40.983896 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 21 10:10:41 crc kubenswrapper[4684]: I0121 10:10:41.110326 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 10:10:41 crc 
kubenswrapper[4684]: I0121 10:10:41.137451 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" event={"ID":"b383da37-e22b-4b3a-9e8b-f0bba8a29b60","Type":"ContainerStarted","Data":"5671c88d836a9cd29d8813b4c4b1be76f01e603c8589481f3f309a873846d20a"} Jan 21 10:10:41 crc kubenswrapper[4684]: I0121 10:10:41.137508 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" event={"ID":"b383da37-e22b-4b3a-9e8b-f0bba8a29b60","Type":"ContainerStarted","Data":"30704d40e441d6644239d17f287769916848ca2c6ee0d63c8b082996fc8b1690"} Jan 21 10:10:41 crc kubenswrapper[4684]: I0121 10:10:41.139100 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:41 crc kubenswrapper[4684]: I0121 10:10:41.164110 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" podStartSLOduration=51.164079589 podStartE2EDuration="51.164079589s" podCreationTimestamp="2026-01-21 10:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:10:41.156967851 +0000 UTC m=+278.915050828" watchObservedRunningTime="2026-01-21 10:10:41.164079589 +0000 UTC m=+278.922162576" Jan 21 10:10:41 crc kubenswrapper[4684]: I0121 10:10:41.192830 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 10:10:41 crc kubenswrapper[4684]: I0121 10:10:41.208859 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 21 10:10:41 crc kubenswrapper[4684]: I0121 10:10:41.295616 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 10:10:41 crc kubenswrapper[4684]: I0121 10:10:41.389111 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 10:10:41 crc kubenswrapper[4684]: I0121 10:10:41.399099 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 21 10:10:41 crc kubenswrapper[4684]: I0121 10:10:41.458125 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 10:10:41 crc kubenswrapper[4684]: I0121 10:10:41.488504 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 10:10:41 crc kubenswrapper[4684]: I0121 10:10:41.493509 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 10:10:41 crc kubenswrapper[4684]: I0121 10:10:41.533420 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 10:10:41 crc kubenswrapper[4684]: I0121 10:10:41.627388 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 10:10:41 crc kubenswrapper[4684]: I0121 10:10:41.634007 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 21 10:10:41 
crc kubenswrapper[4684]: I0121 10:10:41.667957 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 10:10:41 crc kubenswrapper[4684]: I0121 10:10:41.676918 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-96d6999f9-jk85f" Jan 21 10:10:42 crc kubenswrapper[4684]: I0121 10:10:42.268698 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 10:10:42 crc kubenswrapper[4684]: I0121 10:10:42.299876 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 10:10:42 crc kubenswrapper[4684]: I0121 10:10:42.473313 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 10:10:42 crc kubenswrapper[4684]: I0121 10:10:42.640471 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 21 10:10:42 crc kubenswrapper[4684]: I0121 10:10:42.776420 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 10:10:43 crc kubenswrapper[4684]: I0121 10:10:43.113927 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 21 10:10:43 crc kubenswrapper[4684]: I0121 10:10:43.389053 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 10:10:43 crc kubenswrapper[4684]: I0121 10:10:43.533561 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 10:10:43 crc kubenswrapper[4684]: I0121 10:10:43.998824 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 10:10:44 crc kubenswrapper[4684]: I0121 10:10:44.005004 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 21 10:10:44 crc kubenswrapper[4684]: I0121 10:10:44.259043 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 21 10:10:44 crc kubenswrapper[4684]: I0121 10:10:44.309750 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 10:10:44 crc kubenswrapper[4684]: I0121 10:10:44.470784 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 21 10:10:44 crc kubenswrapper[4684]: I0121 10:10:44.602966 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 10:10:45 crc kubenswrapper[4684]: I0121 10:10:45.958870 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 10:10:45 crc kubenswrapper[4684]: I0121 10:10:45.959391 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:10:46 crc kubenswrapper[4684]: I0121 10:10:46.020114 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 10:10:46 crc kubenswrapper[4684]: I0121 10:10:46.020229 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 10:10:46 crc kubenswrapper[4684]: I0121 10:10:46.020246 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:10:46 crc kubenswrapper[4684]: I0121 10:10:46.020285 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 10:10:46 crc kubenswrapper[4684]: I0121 10:10:46.020397 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 10:10:46 crc kubenswrapper[4684]: I0121 10:10:46.020424 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 10:10:46 crc kubenswrapper[4684]: I0121 10:10:46.020755 4684 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 21 10:10:46 crc kubenswrapper[4684]: I0121 10:10:46.020852 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:10:46 crc kubenswrapper[4684]: I0121 10:10:46.020912 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:10:46 crc kubenswrapper[4684]: I0121 10:10:46.020947 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:10:46 crc kubenswrapper[4684]: I0121 10:10:46.029923 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:10:46 crc kubenswrapper[4684]: I0121 10:10:46.121569 4684 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 10:10:46 crc kubenswrapper[4684]: I0121 10:10:46.121612 4684 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 10:10:46 crc kubenswrapper[4684]: I0121 10:10:46.121625 4684 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 10:10:46 crc kubenswrapper[4684]: I0121 10:10:46.121640 4684 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 21 10:10:46 crc kubenswrapper[4684]: I0121 10:10:46.168028 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 10:10:46 crc kubenswrapper[4684]: I0121 10:10:46.168100 4684 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="46bcdf0ca7f40d24b9f81da9defda455f31d4a05d5954583666f066c964846b2" exitCode=137 Jan 21 10:10:46 crc kubenswrapper[4684]: I0121 10:10:46.168165 4684 scope.go:117] "RemoveContainer" containerID="46bcdf0ca7f40d24b9f81da9defda455f31d4a05d5954583666f066c964846b2" Jan 21 10:10:46 crc kubenswrapper[4684]: I0121 10:10:46.168181 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:10:46 crc kubenswrapper[4684]: I0121 10:10:46.202551 4684 scope.go:117] "RemoveContainer" containerID="46bcdf0ca7f40d24b9f81da9defda455f31d4a05d5954583666f066c964846b2" Jan 21 10:10:46 crc kubenswrapper[4684]: E0121 10:10:46.203356 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46bcdf0ca7f40d24b9f81da9defda455f31d4a05d5954583666f066c964846b2\": container with ID starting with 46bcdf0ca7f40d24b9f81da9defda455f31d4a05d5954583666f066c964846b2 not found: ID does not exist" containerID="46bcdf0ca7f40d24b9f81da9defda455f31d4a05d5954583666f066c964846b2" Jan 21 10:10:46 crc kubenswrapper[4684]: I0121 10:10:46.203419 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46bcdf0ca7f40d24b9f81da9defda455f31d4a05d5954583666f066c964846b2"} err="failed to get container status \"46bcdf0ca7f40d24b9f81da9defda455f31d4a05d5954583666f066c964846b2\": rpc error: code = NotFound desc = could not find container \"46bcdf0ca7f40d24b9f81da9defda455f31d4a05d5954583666f066c964846b2\": container with ID starting with 46bcdf0ca7f40d24b9f81da9defda455f31d4a05d5954583666f066c964846b2 not found: ID does not exist" Jan 21 10:10:46 crc kubenswrapper[4684]: I0121 10:10:46.522576 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 21 10:11:01 crc kubenswrapper[4684]: I0121 10:11:01.856168 4684 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6wfjd container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Jan 21 10:11:01 crc kubenswrapper[4684]: I0121 10:11:01.856237 4684 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6wfjd container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Jan 21 10:11:01 crc kubenswrapper[4684]: I0121 10:11:01.857167 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-6wfjd" podUID="603ee3d6-73a1-4796-a857-84f9c889b3af" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Jan 21 10:11:01 crc kubenswrapper[4684]: I0121 10:11:01.857223 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6wfjd" podUID="603ee3d6-73a1-4796-a857-84f9c889b3af" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Jan 21 10:11:02 crc kubenswrapper[4684]: I0121 10:11:02.266470 4684 generic.go:334] "Generic (PLEG): container finished" podID="603ee3d6-73a1-4796-a857-84f9c889b3af" containerID="5f3c9aa33cfe8d9e6fda3e7e375849e25b4abe430d473d50a71ddfc53a39249f" exitCode=0 Jan 21 10:11:02 crc kubenswrapper[4684]: I0121 10:11:02.266557 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6wfjd" 
event={"ID":"603ee3d6-73a1-4796-a857-84f9c889b3af","Type":"ContainerDied","Data":"5f3c9aa33cfe8d9e6fda3e7e375849e25b4abe430d473d50a71ddfc53a39249f"} Jan 21 10:11:02 crc kubenswrapper[4684]: I0121 10:11:02.267194 4684 scope.go:117] "RemoveContainer" containerID="5f3c9aa33cfe8d9e6fda3e7e375849e25b4abe430d473d50a71ddfc53a39249f" Jan 21 10:11:02 crc kubenswrapper[4684]: I0121 10:11:02.382812 4684 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 21 10:11:03 crc kubenswrapper[4684]: I0121 10:11:03.276330 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6wfjd" event={"ID":"603ee3d6-73a1-4796-a857-84f9c889b3af","Type":"ContainerStarted","Data":"faca393790fda69cfde1669c95a027299cb385025001fd137bfd64de9e2ee7bc"} Jan 21 10:11:03 crc kubenswrapper[4684]: I0121 10:11:03.276701 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6wfjd" Jan 21 10:11:03 crc kubenswrapper[4684]: I0121 10:11:03.279660 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-6wfjd" Jan 21 10:11:31 crc kubenswrapper[4684]: I0121 10:11:31.573606 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jmrsp"] Jan 21 10:11:31 crc kubenswrapper[4684]: I0121 10:11:31.575928 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-jmrsp" podUID="e9ebaa53-a616-4d3f-a69f-19da113978c3" containerName="controller-manager" containerID="cri-o://c95ea544d21eb96a17757ebdea5d09cd4b9219952e350e37a6192c7b310012c2" gracePeriod=30 Jan 21 10:11:31 crc kubenswrapper[4684]: I0121 10:11:31.672044 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-k9lpf"] Jan 21 10:11:31 crc kubenswrapper[4684]: I0121 10:11:31.672320 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k9lpf" podUID="5b70a09c-424c-4317-a719-e0dbb6eefe1b" containerName="route-controller-manager" containerID="cri-o://2af020b2920c0c7ed8144ab3a7c1d4cecc14b3be9d787aaaf71e28e9b8d50796" gracePeriod=30 Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.038405 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jmrsp" Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.174185 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e9ebaa53-a616-4d3f-a69f-19da113978c3-client-ca\") pod \"e9ebaa53-a616-4d3f-a69f-19da113978c3\" (UID: \"e9ebaa53-a616-4d3f-a69f-19da113978c3\") " Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.174343 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9ebaa53-a616-4d3f-a69f-19da113978c3-serving-cert\") pod \"e9ebaa53-a616-4d3f-a69f-19da113978c3\" (UID: \"e9ebaa53-a616-4d3f-a69f-19da113978c3\") " Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.174403 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knh99\" (UniqueName: \"kubernetes.io/projected/e9ebaa53-a616-4d3f-a69f-19da113978c3-kube-api-access-knh99\") pod \"e9ebaa53-a616-4d3f-a69f-19da113978c3\" (UID: \"e9ebaa53-a616-4d3f-a69f-19da113978c3\") " Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.174438 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e9ebaa53-a616-4d3f-a69f-19da113978c3-proxy-ca-bundles\") pod \"e9ebaa53-a616-4d3f-a69f-19da113978c3\" (UID: \"e9ebaa53-a616-4d3f-a69f-19da113978c3\") " Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.175272 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9ebaa53-a616-4d3f-a69f-19da113978c3-client-ca" (OuterVolumeSpecName: "client-ca") pod "e9ebaa53-a616-4d3f-a69f-19da113978c3" (UID: "e9ebaa53-a616-4d3f-a69f-19da113978c3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.175464 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9ebaa53-a616-4d3f-a69f-19da113978c3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e9ebaa53-a616-4d3f-a69f-19da113978c3" (UID: "e9ebaa53-a616-4d3f-a69f-19da113978c3"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.175758 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9ebaa53-a616-4d3f-a69f-19da113978c3-config\") pod \"e9ebaa53-a616-4d3f-a69f-19da113978c3\" (UID: \"e9ebaa53-a616-4d3f-a69f-19da113978c3\") " Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.176479 4684 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e9ebaa53-a616-4d3f-a69f-19da113978c3-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.176515 4684 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e9ebaa53-a616-4d3f-a69f-19da113978c3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.176922 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9ebaa53-a616-4d3f-a69f-19da113978c3-config" (OuterVolumeSpecName: "config") pod "e9ebaa53-a616-4d3f-a69f-19da113978c3" (UID: "e9ebaa53-a616-4d3f-a69f-19da113978c3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.181426 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9ebaa53-a616-4d3f-a69f-19da113978c3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e9ebaa53-a616-4d3f-a69f-19da113978c3" (UID: "e9ebaa53-a616-4d3f-a69f-19da113978c3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.184119 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9ebaa53-a616-4d3f-a69f-19da113978c3-kube-api-access-knh99" (OuterVolumeSpecName: "kube-api-access-knh99") pod "e9ebaa53-a616-4d3f-a69f-19da113978c3" (UID: "e9ebaa53-a616-4d3f-a69f-19da113978c3"). InnerVolumeSpecName "kube-api-access-knh99". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.223652 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k9lpf" Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.277373 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km8vb\" (UniqueName: \"kubernetes.io/projected/5b70a09c-424c-4317-a719-e0dbb6eefe1b-kube-api-access-km8vb\") pod \"5b70a09c-424c-4317-a719-e0dbb6eefe1b\" (UID: \"5b70a09c-424c-4317-a719-e0dbb6eefe1b\") " Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.277531 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b70a09c-424c-4317-a719-e0dbb6eefe1b-config\") pod \"5b70a09c-424c-4317-a719-e0dbb6eefe1b\" (UID: \"5b70a09c-424c-4317-a719-e0dbb6eefe1b\") " Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.277610 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b70a09c-424c-4317-a719-e0dbb6eefe1b-serving-cert\") pod \"5b70a09c-424c-4317-a719-e0dbb6eefe1b\" (UID: \"5b70a09c-424c-4317-a719-e0dbb6eefe1b\") " Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.277646 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b70a09c-424c-4317-a719-e0dbb6eefe1b-client-ca\") pod \"5b70a09c-424c-4317-a719-e0dbb6eefe1b\" (UID: \"5b70a09c-424c-4317-a719-e0dbb6eefe1b\") " Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.277843 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9ebaa53-a616-4d3f-a69f-19da113978c3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.277862 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knh99\" (UniqueName: \"kubernetes.io/projected/e9ebaa53-a616-4d3f-a69f-19da113978c3-kube-api-access-knh99\") on node \"crc\" DevicePath \"\"" Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.277879 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9ebaa53-a616-4d3f-a69f-19da113978c3-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.278649 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b70a09c-424c-4317-a719-e0dbb6eefe1b-client-ca" (OuterVolumeSpecName: "client-ca") pod "5b70a09c-424c-4317-a719-e0dbb6eefe1b" (UID: "5b70a09c-424c-4317-a719-e0dbb6eefe1b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.278684 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b70a09c-424c-4317-a719-e0dbb6eefe1b-config" (OuterVolumeSpecName: "config") pod "5b70a09c-424c-4317-a719-e0dbb6eefe1b" (UID: "5b70a09c-424c-4317-a719-e0dbb6eefe1b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.281885 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b70a09c-424c-4317-a719-e0dbb6eefe1b-kube-api-access-km8vb" (OuterVolumeSpecName: "kube-api-access-km8vb") pod "5b70a09c-424c-4317-a719-e0dbb6eefe1b" (UID: "5b70a09c-424c-4317-a719-e0dbb6eefe1b"). InnerVolumeSpecName "kube-api-access-km8vb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.289422 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b70a09c-424c-4317-a719-e0dbb6eefe1b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5b70a09c-424c-4317-a719-e0dbb6eefe1b" (UID: "5b70a09c-424c-4317-a719-e0dbb6eefe1b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.379005 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b70a09c-424c-4317-a719-e0dbb6eefe1b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.379125 4684 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b70a09c-424c-4317-a719-e0dbb6eefe1b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.379144 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-km8vb\" (UniqueName: \"kubernetes.io/projected/5b70a09c-424c-4317-a719-e0dbb6eefe1b-kube-api-access-km8vb\") on node \"crc\" DevicePath \"\"" Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.379162 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b70a09c-424c-4317-a719-e0dbb6eefe1b-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.472522 4684 generic.go:334] "Generic (PLEG): container finished" podID="e9ebaa53-a616-4d3f-a69f-19da113978c3" containerID="c95ea544d21eb96a17757ebdea5d09cd4b9219952e350e37a6192c7b310012c2" exitCode=0 Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.472668 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jmrsp" event={"ID":"e9ebaa53-a616-4d3f-a69f-19da113978c3","Type":"ContainerDied","Data":"c95ea544d21eb96a17757ebdea5d09cd4b9219952e350e37a6192c7b310012c2"} Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.472707 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jmrsp" event={"ID":"e9ebaa53-a616-4d3f-a69f-19da113978c3","Type":"ContainerDied","Data":"8f91f62ed7f27f210afbf10654fc06da91447ec2d101e31c06ba1e30c92f96c9"} Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.472720 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jmrsp" Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.472732 4684 scope.go:117] "RemoveContainer" containerID="c95ea544d21eb96a17757ebdea5d09cd4b9219952e350e37a6192c7b310012c2" Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.475419 4684 generic.go:334] "Generic (PLEG): container finished" podID="5b70a09c-424c-4317-a719-e0dbb6eefe1b" containerID="2af020b2920c0c7ed8144ab3a7c1d4cecc14b3be9d787aaaf71e28e9b8d50796" exitCode=0 Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.475542 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k9lpf" Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.475517 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k9lpf" event={"ID":"5b70a09c-424c-4317-a719-e0dbb6eefe1b","Type":"ContainerDied","Data":"2af020b2920c0c7ed8144ab3a7c1d4cecc14b3be9d787aaaf71e28e9b8d50796"} Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.475638 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-k9lpf" event={"ID":"5b70a09c-424c-4317-a719-e0dbb6eefe1b","Type":"ContainerDied","Data":"020d0c6e69d052e4dadc85918b23254c0b38766c54db2dcffd6a961cb6d565a8"} Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.491599 4684 scope.go:117] "RemoveContainer" containerID="c95ea544d21eb96a17757ebdea5d09cd4b9219952e350e37a6192c7b310012c2" Jan 21 10:11:32 crc kubenswrapper[4684]: E0121 10:11:32.492294 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c95ea544d21eb96a17757ebdea5d09cd4b9219952e350e37a6192c7b310012c2\": container with ID starting with c95ea544d21eb96a17757ebdea5d09cd4b9219952e350e37a6192c7b310012c2 not found: ID does not exist" containerID="c95ea544d21eb96a17757ebdea5d09cd4b9219952e350e37a6192c7b310012c2" Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.492414 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c95ea544d21eb96a17757ebdea5d09cd4b9219952e350e37a6192c7b310012c2"} err="failed to get container status \"c95ea544d21eb96a17757ebdea5d09cd4b9219952e350e37a6192c7b310012c2\": rpc error: code = NotFound desc = could not find container \"c95ea544d21eb96a17757ebdea5d09cd4b9219952e350e37a6192c7b310012c2\": container with ID starting with c95ea544d21eb96a17757ebdea5d09cd4b9219952e350e37a6192c7b310012c2 not found: ID does not exist" Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.493726 4684 scope.go:117] "RemoveContainer" containerID="2af020b2920c0c7ed8144ab3a7c1d4cecc14b3be9d787aaaf71e28e9b8d50796" Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.519155 4684 scope.go:117] "RemoveContainer" containerID="2af020b2920c0c7ed8144ab3a7c1d4cecc14b3be9d787aaaf71e28e9b8d50796" Jan 21 10:11:32 crc kubenswrapper[4684]: E0121 10:11:32.519760 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2af020b2920c0c7ed8144ab3a7c1d4cecc14b3be9d787aaaf71e28e9b8d50796\": container with ID starting with 2af020b2920c0c7ed8144ab3a7c1d4cecc14b3be9d787aaaf71e28e9b8d50796 not found: ID does not exist" containerID="2af020b2920c0c7ed8144ab3a7c1d4cecc14b3be9d787aaaf71e28e9b8d50796" Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.519834 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2af020b2920c0c7ed8144ab3a7c1d4cecc14b3be9d787aaaf71e28e9b8d50796"} err="failed to get container status \"2af020b2920c0c7ed8144ab3a7c1d4cecc14b3be9d787aaaf71e28e9b8d50796\": rpc error: code = NotFound desc = could not find container \"2af020b2920c0c7ed8144ab3a7c1d4cecc14b3be9d787aaaf71e28e9b8d50796\": container with ID starting with 2af020b2920c0c7ed8144ab3a7c1d4cecc14b3be9d787aaaf71e28e9b8d50796 not found: ID does not exist" Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.532000 4684 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jmrsp"] Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.532051 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jmrsp"] Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.540300 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-k9lpf"] Jan 21 10:11:32 crc kubenswrapper[4684]: I0121 10:11:32.545232 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-k9lpf"] Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.317799 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f8c564845-b42f8"] Jan 21 10:11:33 crc kubenswrapper[4684]: E0121 10:11:33.318613 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9ebaa53-a616-4d3f-a69f-19da113978c3" containerName="controller-manager" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.318639 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9ebaa53-a616-4d3f-a69f-19da113978c3" containerName="controller-manager" Jan 21 10:11:33 crc kubenswrapper[4684]: E0121 10:11:33.318654 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.318664 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 10:11:33 crc kubenswrapper[4684]: E0121 10:11:33.318694 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b70a09c-424c-4317-a719-e0dbb6eefe1b" containerName="route-controller-manager" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.318704 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b70a09c-424c-4317-a719-e0dbb6eefe1b" containerName="route-controller-manager" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.318861 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9ebaa53-a616-4d3f-a69f-19da113978c3" containerName="controller-manager" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.318890 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.318902 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b70a09c-424c-4317-a719-e0dbb6eefe1b" containerName="route-controller-manager" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.319556 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-b42f8" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.322450 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.322877 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.322960 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.323561 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.324452 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-54467446f7-tv82w"] Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.324483 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.325435 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.325762 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54467446f7-tv82w" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.328609 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.329949 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f8c564845-b42f8"] Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.330695 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.330953 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.331195 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.332544 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.333140 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.345679 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54467446f7-tv82w"] Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.347920 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.395695 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/c4d2bf04-b691-4acb-b40f-7361df1721f7-client-ca\") pod \"route-controller-manager-f8c564845-b42f8\" (UID: \"c4d2bf04-b691-4acb-b40f-7361df1721f7\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-b42f8" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.396254 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7zrn\" (UniqueName: \"kubernetes.io/projected/c4d2bf04-b691-4acb-b40f-7361df1721f7-kube-api-access-n7zrn\") pod \"route-controller-manager-f8c564845-b42f8\" (UID: \"c4d2bf04-b691-4acb-b40f-7361df1721f7\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-b42f8" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.396518 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4d2bf04-b691-4acb-b40f-7361df1721f7-config\") pod \"route-controller-manager-f8c564845-b42f8\" (UID: \"c4d2bf04-b691-4acb-b40f-7361df1721f7\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-b42f8" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.396617 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4d2bf04-b691-4acb-b40f-7361df1721f7-serving-cert\") pod \"route-controller-manager-f8c564845-b42f8\" (UID: \"c4d2bf04-b691-4acb-b40f-7361df1721f7\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-b42f8" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.497780 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7zrn\" (UniqueName: \"kubernetes.io/projected/c4d2bf04-b691-4acb-b40f-7361df1721f7-kube-api-access-n7zrn\") pod \"route-controller-manager-f8c564845-b42f8\" (UID: \"c4d2bf04-b691-4acb-b40f-7361df1721f7\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-b42f8" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.497890 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4d2bf04-b691-4acb-b40f-7361df1721f7-config\") pod \"route-controller-manager-f8c564845-b42f8\" (UID: \"c4d2bf04-b691-4acb-b40f-7361df1721f7\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-b42f8" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.497954 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf88b4f5-7d08-4c52-b4bd-948849eba8dc-serving-cert\") pod \"controller-manager-54467446f7-tv82w\" (UID: \"cf88b4f5-7d08-4c52-b4bd-948849eba8dc\") " pod="openshift-controller-manager/controller-manager-54467446f7-tv82w" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.497989 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf88b4f5-7d08-4c52-b4bd-948849eba8dc-client-ca\") pod \"controller-manager-54467446f7-tv82w\" (UID: \"cf88b4f5-7d08-4c52-b4bd-948849eba8dc\") " pod="openshift-controller-manager/controller-manager-54467446f7-tv82w" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.498028 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c4d2bf04-b691-4acb-b40f-7361df1721f7-serving-cert\") pod \"route-controller-manager-f8c564845-b42f8\" (UID: \"c4d2bf04-b691-4acb-b40f-7361df1721f7\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-b42f8" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.498083 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf88b4f5-7d08-4c52-b4bd-948849eba8dc-config\") pod \"controller-manager-54467446f7-tv82w\" (UID: \"cf88b4f5-7d08-4c52-b4bd-948849eba8dc\") " pod="openshift-controller-manager/controller-manager-54467446f7-tv82w" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.498122 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4d2bf04-b691-4acb-b40f-7361df1721f7-client-ca\") pod \"route-controller-manager-f8c564845-b42f8\" (UID: \"c4d2bf04-b691-4acb-b40f-7361df1721f7\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-b42f8" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.498455 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rp59\" (UniqueName: \"kubernetes.io/projected/cf88b4f5-7d08-4c52-b4bd-948849eba8dc-kube-api-access-7rp59\") pod \"controller-manager-54467446f7-tv82w\" (UID: \"cf88b4f5-7d08-4c52-b4bd-948849eba8dc\") " pod="openshift-controller-manager/controller-manager-54467446f7-tv82w" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.498543 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf88b4f5-7d08-4c52-b4bd-948849eba8dc-proxy-ca-bundles\") pod \"controller-manager-54467446f7-tv82w\" (UID: \"cf88b4f5-7d08-4c52-b4bd-948849eba8dc\") " pod="openshift-controller-manager/controller-manager-54467446f7-tv82w" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.499658 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4d2bf04-b691-4acb-b40f-7361df1721f7-config\") pod \"route-controller-manager-f8c564845-b42f8\" (UID: \"c4d2bf04-b691-4acb-b40f-7361df1721f7\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-b42f8" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.499886 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4d2bf04-b691-4acb-b40f-7361df1721f7-client-ca\") pod \"route-controller-manager-f8c564845-b42f8\" (UID: \"c4d2bf04-b691-4acb-b40f-7361df1721f7\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-b42f8" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.507067 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4d2bf04-b691-4acb-b40f-7361df1721f7-serving-cert\") pod \"route-controller-manager-f8c564845-b42f8\" (UID: \"c4d2bf04-b691-4acb-b40f-7361df1721f7\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-b42f8" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.523261 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7zrn\" (UniqueName: \"kubernetes.io/projected/c4d2bf04-b691-4acb-b40f-7361df1721f7-kube-api-access-n7zrn\") pod 
\"route-controller-manager-f8c564845-b42f8\" (UID: \"c4d2bf04-b691-4acb-b40f-7361df1721f7\") " pod="openshift-route-controller-manager/route-controller-manager-f8c564845-b42f8" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.600602 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf88b4f5-7d08-4c52-b4bd-948849eba8dc-config\") pod \"controller-manager-54467446f7-tv82w\" (UID: \"cf88b4f5-7d08-4c52-b4bd-948849eba8dc\") " pod="openshift-controller-manager/controller-manager-54467446f7-tv82w" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.601214 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rp59\" (UniqueName: \"kubernetes.io/projected/cf88b4f5-7d08-4c52-b4bd-948849eba8dc-kube-api-access-7rp59\") pod \"controller-manager-54467446f7-tv82w\" (UID: \"cf88b4f5-7d08-4c52-b4bd-948849eba8dc\") " pod="openshift-controller-manager/controller-manager-54467446f7-tv82w" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.601561 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf88b4f5-7d08-4c52-b4bd-948849eba8dc-proxy-ca-bundles\") pod \"controller-manager-54467446f7-tv82w\" (UID: \"cf88b4f5-7d08-4c52-b4bd-948849eba8dc\") " pod="openshift-controller-manager/controller-manager-54467446f7-tv82w" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.601858 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf88b4f5-7d08-4c52-b4bd-948849eba8dc-client-ca\") pod \"controller-manager-54467446f7-tv82w\" (UID: \"cf88b4f5-7d08-4c52-b4bd-948849eba8dc\") " pod="openshift-controller-manager/controller-manager-54467446f7-tv82w" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.602012 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf88b4f5-7d08-4c52-b4bd-948849eba8dc-serving-cert\") pod \"controller-manager-54467446f7-tv82w\" (UID: \"cf88b4f5-7d08-4c52-b4bd-948849eba8dc\") " pod="openshift-controller-manager/controller-manager-54467446f7-tv82w" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.602313 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf88b4f5-7d08-4c52-b4bd-948849eba8dc-config\") pod \"controller-manager-54467446f7-tv82w\" (UID: \"cf88b4f5-7d08-4c52-b4bd-948849eba8dc\") " pod="openshift-controller-manager/controller-manager-54467446f7-tv82w" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.603894 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf88b4f5-7d08-4c52-b4bd-948849eba8dc-client-ca\") pod \"controller-manager-54467446f7-tv82w\" (UID: \"cf88b4f5-7d08-4c52-b4bd-948849eba8dc\") " pod="openshift-controller-manager/controller-manager-54467446f7-tv82w" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.604237 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf88b4f5-7d08-4c52-b4bd-948849eba8dc-proxy-ca-bundles\") pod \"controller-manager-54467446f7-tv82w\" (UID: \"cf88b4f5-7d08-4c52-b4bd-948849eba8dc\") " pod="openshift-controller-manager/controller-manager-54467446f7-tv82w" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.614070 
4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf88b4f5-7d08-4c52-b4bd-948849eba8dc-serving-cert\") pod \"controller-manager-54467446f7-tv82w\" (UID: \"cf88b4f5-7d08-4c52-b4bd-948849eba8dc\") " pod="openshift-controller-manager/controller-manager-54467446f7-tv82w" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.625399 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rp59\" (UniqueName: \"kubernetes.io/projected/cf88b4f5-7d08-4c52-b4bd-948849eba8dc-kube-api-access-7rp59\") pod \"controller-manager-54467446f7-tv82w\" (UID: \"cf88b4f5-7d08-4c52-b4bd-948849eba8dc\") " pod="openshift-controller-manager/controller-manager-54467446f7-tv82w" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.639099 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-b42f8" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.652208 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-54467446f7-tv82w" Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.892068 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f8c564845-b42f8"] Jan 21 10:11:33 crc kubenswrapper[4684]: I0121 10:11:33.948011 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-54467446f7-tv82w"] Jan 21 10:11:34 crc kubenswrapper[4684]: I0121 10:11:34.494079 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54467446f7-tv82w" event={"ID":"cf88b4f5-7d08-4c52-b4bd-948849eba8dc","Type":"ContainerStarted","Data":"9e8973a5e5593549300b70655015d11875acb90fa5ac62330dc759b5045c39dc"} Jan 21 10:11:34 crc kubenswrapper[4684]: I0121 10:11:34.494783 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-54467446f7-tv82w" Jan 21 10:11:34 crc kubenswrapper[4684]: I0121 10:11:34.494809 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54467446f7-tv82w" event={"ID":"cf88b4f5-7d08-4c52-b4bd-948849eba8dc","Type":"ContainerStarted","Data":"c608e332e6d6d2b82cbb0420a594d020481ce8216e7df4374d35342fe47725b5"} Jan 21 10:11:34 crc kubenswrapper[4684]: I0121 10:11:34.499713 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-b42f8" event={"ID":"c4d2bf04-b691-4acb-b40f-7361df1721f7","Type":"ContainerStarted","Data":"a5c3c1a8e0b285dbf6ff938d8bac13acf7fd940e4fb76b881af900fa34b04185"} Jan 21 10:11:34 crc kubenswrapper[4684]: I0121 10:11:34.499769 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-b42f8" event={"ID":"c4d2bf04-b691-4acb-b40f-7361df1721f7","Type":"ContainerStarted","Data":"09826162260f64f9aebd1fcdfbefdd5bcdc7916ad14ede1e4008fae2723c80f2"} Jan 21 10:11:34 crc kubenswrapper[4684]: I0121 10:11:34.499954 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-b42f8" Jan 21 10:11:34 crc kubenswrapper[4684]: I0121 10:11:34.501963 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-54467446f7-tv82w" Jan 21 10:11:34 crc kubenswrapper[4684]: I0121 10:11:34.531250 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b70a09c-424c-4317-a719-e0dbb6eefe1b" path="/var/lib/kubelet/pods/5b70a09c-424c-4317-a719-e0dbb6eefe1b/volumes" Jan 21 10:11:34 crc kubenswrapper[4684]: I0121 10:11:34.532206 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9ebaa53-a616-4d3f-a69f-19da113978c3" path="/var/lib/kubelet/pods/e9ebaa53-a616-4d3f-a69f-19da113978c3/volumes" Jan 21 10:11:34 crc kubenswrapper[4684]: I0121 10:11:34.550644 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-54467446f7-tv82w" podStartSLOduration=3.550613677 podStartE2EDuration="3.550613677s" podCreationTimestamp="2026-01-21 10:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:11:34.531863842 +0000 UTC m=+332.289946839" watchObservedRunningTime="2026-01-21 10:11:34.550613677 +0000 UTC m=+332.308696644" Jan 21 10:11:34 crc kubenswrapper[4684]: I0121 10:11:34.570138 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-b42f8" podStartSLOduration=3.570108934 podStartE2EDuration="3.570108934s" podCreationTimestamp="2026-01-21 10:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:11:34.563746816 +0000 UTC m=+332.321829803" watchObservedRunningTime="2026-01-21 10:11:34.570108934 +0000 UTC m=+332.328191901" Jan 21 10:11:34 crc kubenswrapper[4684]: I0121 10:11:34.738598 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-b42f8" Jan 21 10:11:51 crc kubenswrapper[4684]: I0121 10:11:51.535624 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f8c564845-b42f8"] Jan 21 10:11:51 crc kubenswrapper[4684]: I0121 10:11:51.536762 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-b42f8" podUID="c4d2bf04-b691-4acb-b40f-7361df1721f7" containerName="route-controller-manager" containerID="cri-o://a5c3c1a8e0b285dbf6ff938d8bac13acf7fd940e4fb76b881af900fa34b04185" gracePeriod=30 Jan 21 10:11:52 crc kubenswrapper[4684]: I0121 10:11:52.027453 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-b42f8" Jan 21 10:11:52 crc kubenswrapper[4684]: I0121 10:11:52.203309 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4d2bf04-b691-4acb-b40f-7361df1721f7-client-ca\") pod \"c4d2bf04-b691-4acb-b40f-7361df1721f7\" (UID: \"c4d2bf04-b691-4acb-b40f-7361df1721f7\") " Jan 21 10:11:52 crc kubenswrapper[4684]: I0121 10:11:52.203420 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4d2bf04-b691-4acb-b40f-7361df1721f7-config\") pod \"c4d2bf04-b691-4acb-b40f-7361df1721f7\" (UID: \"c4d2bf04-b691-4acb-b40f-7361df1721f7\") " Jan 21 10:11:52 crc kubenswrapper[4684]: I0121 10:11:52.203473 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7zrn\" (UniqueName: \"kubernetes.io/projected/c4d2bf04-b691-4acb-b40f-7361df1721f7-kube-api-access-n7zrn\") pod \"c4d2bf04-b691-4acb-b40f-7361df1721f7\" (UID: \"c4d2bf04-b691-4acb-b40f-7361df1721f7\") " Jan 21 10:11:52 crc kubenswrapper[4684]: I0121 10:11:52.203518 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4d2bf04-b691-4acb-b40f-7361df1721f7-serving-cert\") pod \"c4d2bf04-b691-4acb-b40f-7361df1721f7\" (UID: \"c4d2bf04-b691-4acb-b40f-7361df1721f7\") " Jan 21 10:11:52 crc kubenswrapper[4684]: I0121 10:11:52.205147 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4d2bf04-b691-4acb-b40f-7361df1721f7-client-ca" (OuterVolumeSpecName: "client-ca") pod "c4d2bf04-b691-4acb-b40f-7361df1721f7" (UID: "c4d2bf04-b691-4acb-b40f-7361df1721f7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:11:52 crc kubenswrapper[4684]: I0121 10:11:52.205211 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4d2bf04-b691-4acb-b40f-7361df1721f7-config" (OuterVolumeSpecName: "config") pod "c4d2bf04-b691-4acb-b40f-7361df1721f7" (UID: "c4d2bf04-b691-4acb-b40f-7361df1721f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:11:52 crc kubenswrapper[4684]: I0121 10:11:52.211680 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4d2bf04-b691-4acb-b40f-7361df1721f7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c4d2bf04-b691-4acb-b40f-7361df1721f7" (UID: "c4d2bf04-b691-4acb-b40f-7361df1721f7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:11:52 crc kubenswrapper[4684]: I0121 10:11:52.211731 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4d2bf04-b691-4acb-b40f-7361df1721f7-kube-api-access-n7zrn" (OuterVolumeSpecName: "kube-api-access-n7zrn") pod "c4d2bf04-b691-4acb-b40f-7361df1721f7" (UID: "c4d2bf04-b691-4acb-b40f-7361df1721f7"). InnerVolumeSpecName "kube-api-access-n7zrn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:11:52 crc kubenswrapper[4684]: I0121 10:11:52.305619 4684 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4d2bf04-b691-4acb-b40f-7361df1721f7-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:11:52 crc kubenswrapper[4684]: I0121 10:11:52.305680 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4d2bf04-b691-4acb-b40f-7361df1721f7-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:11:52 crc kubenswrapper[4684]: I0121 10:11:52.305703 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7zrn\" (UniqueName: \"kubernetes.io/projected/c4d2bf04-b691-4acb-b40f-7361df1721f7-kube-api-access-n7zrn\") on node \"crc\" DevicePath \"\"" Jan 21 10:11:52 crc kubenswrapper[4684]: I0121 10:11:52.305723 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4d2bf04-b691-4acb-b40f-7361df1721f7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:11:52 crc kubenswrapper[4684]: I0121 10:11:52.618610 4684 generic.go:334] "Generic (PLEG): container finished" podID="c4d2bf04-b691-4acb-b40f-7361df1721f7" containerID="a5c3c1a8e0b285dbf6ff938d8bac13acf7fd940e4fb76b881af900fa34b04185" exitCode=0 Jan 21 10:11:52 crc kubenswrapper[4684]: I0121 10:11:52.618708 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-b42f8" event={"ID":"c4d2bf04-b691-4acb-b40f-7361df1721f7","Type":"ContainerDied","Data":"a5c3c1a8e0b285dbf6ff938d8bac13acf7fd940e4fb76b881af900fa34b04185"} Jan 21 10:11:52 crc kubenswrapper[4684]: I0121 10:11:52.618758 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-b42f8" event={"ID":"c4d2bf04-b691-4acb-b40f-7361df1721f7","Type":"ContainerDied","Data":"09826162260f64f9aebd1fcdfbefdd5bcdc7916ad14ede1e4008fae2723c80f2"} Jan 21 10:11:52 crc kubenswrapper[4684]: I0121 10:11:52.618780 4684 scope.go:117] "RemoveContainer" containerID="a5c3c1a8e0b285dbf6ff938d8bac13acf7fd940e4fb76b881af900fa34b04185" Jan 21 10:11:52 crc kubenswrapper[4684]: I0121 10:11:52.620657 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f8c564845-b42f8" Jan 21 10:11:52 crc kubenswrapper[4684]: I0121 10:11:52.648173 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f8c564845-b42f8"] Jan 21 10:11:52 crc kubenswrapper[4684]: I0121 10:11:52.651203 4684 scope.go:117] "RemoveContainer" containerID="a5c3c1a8e0b285dbf6ff938d8bac13acf7fd940e4fb76b881af900fa34b04185" Jan 21 10:11:52 crc kubenswrapper[4684]: E0121 10:11:52.653408 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5c3c1a8e0b285dbf6ff938d8bac13acf7fd940e4fb76b881af900fa34b04185\": container with ID starting with a5c3c1a8e0b285dbf6ff938d8bac13acf7fd940e4fb76b881af900fa34b04185 not found: ID does not exist" containerID="a5c3c1a8e0b285dbf6ff938d8bac13acf7fd940e4fb76b881af900fa34b04185" Jan 21 10:11:52 crc kubenswrapper[4684]: I0121 10:11:52.653476 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5c3c1a8e0b285dbf6ff938d8bac13acf7fd940e4fb76b881af900fa34b04185"} err="failed to get container status \"a5c3c1a8e0b285dbf6ff938d8bac13acf7fd940e4fb76b881af900fa34b04185\": rpc error: code = NotFound desc = could not find container \"a5c3c1a8e0b285dbf6ff938d8bac13acf7fd940e4fb76b881af900fa34b04185\": container with ID starting with a5c3c1a8e0b285dbf6ff938d8bac13acf7fd940e4fb76b881af900fa34b04185 not found: ID does not exist" Jan 21 10:11:52 crc kubenswrapper[4684]: I0121 10:11:52.664509 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f8c564845-b42f8"] Jan 21 10:11:53 crc kubenswrapper[4684]: I0121 10:11:53.331022 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f4446cbf-4h8jv"] Jan 21 10:11:53 crc kubenswrapper[4684]: E0121 10:11:53.331413 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4d2bf04-b691-4acb-b40f-7361df1721f7" containerName="route-controller-manager" Jan 21 10:11:53 crc kubenswrapper[4684]: I0121 10:11:53.331442 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4d2bf04-b691-4acb-b40f-7361df1721f7" containerName="route-controller-manager" Jan 21 10:11:53 crc kubenswrapper[4684]: I0121 10:11:53.331707 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4d2bf04-b691-4acb-b40f-7361df1721f7" containerName="route-controller-manager" Jan 21 10:11:53 crc kubenswrapper[4684]: I0121 10:11:53.332308 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-4h8jv" Jan 21 10:11:53 crc kubenswrapper[4684]: I0121 10:11:53.335905 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 10:11:53 crc kubenswrapper[4684]: I0121 10:11:53.336110 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 10:11:53 crc kubenswrapper[4684]: I0121 10:11:53.336379 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 10:11:53 crc kubenswrapper[4684]: I0121 10:11:53.336507 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 10:11:53 crc kubenswrapper[4684]: I0121 10:11:53.336642 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 10:11:53 crc kubenswrapper[4684]: I0121 10:11:53.337301 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 10:11:53 crc kubenswrapper[4684]: I0121 10:11:53.344423 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f4446cbf-4h8jv"] Jan 21 10:11:53 crc kubenswrapper[4684]: I0121 10:11:53.523468 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/296b96a7-25e7-453e-bb53-179124ee274a-config\") pod \"route-controller-manager-f4446cbf-4h8jv\" (UID: \"296b96a7-25e7-453e-bb53-179124ee274a\") " pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-4h8jv" Jan 21 10:11:53 crc kubenswrapper[4684]: I0121 10:11:53.523525 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/296b96a7-25e7-453e-bb53-179124ee274a-serving-cert\") pod \"route-controller-manager-f4446cbf-4h8jv\" (UID: \"296b96a7-25e7-453e-bb53-179124ee274a\") " pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-4h8jv" Jan 21 10:11:53 crc kubenswrapper[4684]: I0121 10:11:53.523551 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/296b96a7-25e7-453e-bb53-179124ee274a-client-ca\") pod \"route-controller-manager-f4446cbf-4h8jv\" (UID: \"296b96a7-25e7-453e-bb53-179124ee274a\") " pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-4h8jv" Jan 21 10:11:53 crc kubenswrapper[4684]: I0121 10:11:53.523583 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4jn2\" (UniqueName: \"kubernetes.io/projected/296b96a7-25e7-453e-bb53-179124ee274a-kube-api-access-k4jn2\") pod \"route-controller-manager-f4446cbf-4h8jv\" (UID: \"296b96a7-25e7-453e-bb53-179124ee274a\") " pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-4h8jv" Jan 21 10:11:53 crc kubenswrapper[4684]: I0121 10:11:53.625008 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/296b96a7-25e7-453e-bb53-179124ee274a-config\") pod \"route-controller-manager-f4446cbf-4h8jv\" (UID: 
\"296b96a7-25e7-453e-bb53-179124ee274a\") " pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-4h8jv" Jan 21 10:11:53 crc kubenswrapper[4684]: I0121 10:11:53.625143 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/296b96a7-25e7-453e-bb53-179124ee274a-serving-cert\") pod \"route-controller-manager-f4446cbf-4h8jv\" (UID: \"296b96a7-25e7-453e-bb53-179124ee274a\") " pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-4h8jv" Jan 21 10:11:53 crc kubenswrapper[4684]: I0121 10:11:53.625216 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/296b96a7-25e7-453e-bb53-179124ee274a-client-ca\") pod \"route-controller-manager-f4446cbf-4h8jv\" (UID: \"296b96a7-25e7-453e-bb53-179124ee274a\") " pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-4h8jv" Jan 21 10:11:53 crc kubenswrapper[4684]: I0121 10:11:53.625299 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4jn2\" (UniqueName: \"kubernetes.io/projected/296b96a7-25e7-453e-bb53-179124ee274a-kube-api-access-k4jn2\") pod \"route-controller-manager-f4446cbf-4h8jv\" (UID: \"296b96a7-25e7-453e-bb53-179124ee274a\") " pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-4h8jv" Jan 21 10:11:53 crc kubenswrapper[4684]: I0121 10:11:53.626708 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/296b96a7-25e7-453e-bb53-179124ee274a-config\") pod \"route-controller-manager-f4446cbf-4h8jv\" (UID: \"296b96a7-25e7-453e-bb53-179124ee274a\") " pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-4h8jv" Jan 21 10:11:53 crc kubenswrapper[4684]: I0121 10:11:53.626728 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/296b96a7-25e7-453e-bb53-179124ee274a-client-ca\") pod \"route-controller-manager-f4446cbf-4h8jv\" (UID: \"296b96a7-25e7-453e-bb53-179124ee274a\") " pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-4h8jv" Jan 21 10:11:53 crc kubenswrapper[4684]: I0121 10:11:53.633355 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/296b96a7-25e7-453e-bb53-179124ee274a-serving-cert\") pod \"route-controller-manager-f4446cbf-4h8jv\" (UID: \"296b96a7-25e7-453e-bb53-179124ee274a\") " pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-4h8jv" Jan 21 10:11:53 crc kubenswrapper[4684]: I0121 10:11:53.664283 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4jn2\" (UniqueName: \"kubernetes.io/projected/296b96a7-25e7-453e-bb53-179124ee274a-kube-api-access-k4jn2\") pod \"route-controller-manager-f4446cbf-4h8jv\" (UID: \"296b96a7-25e7-453e-bb53-179124ee274a\") " pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-4h8jv" Jan 21 10:11:53 crc kubenswrapper[4684]: I0121 10:11:53.667339 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-4h8jv" Jan 21 10:11:54 crc kubenswrapper[4684]: I0121 10:11:54.149122 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f4446cbf-4h8jv"] Jan 21 10:11:54 crc kubenswrapper[4684]: W0121 10:11:54.158913 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod296b96a7_25e7_453e_bb53_179124ee274a.slice/crio-e705d2c7631217d8df288a15847b19b7a10ccf4780bc99cb1d84c052dd9584ee WatchSource:0}: Error finding container e705d2c7631217d8df288a15847b19b7a10ccf4780bc99cb1d84c052dd9584ee: Status 404 returned error can't find the container with id e705d2c7631217d8df288a15847b19b7a10ccf4780bc99cb1d84c052dd9584ee Jan 21 10:11:54 crc kubenswrapper[4684]: I0121 10:11:54.523225 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4d2bf04-b691-4acb-b40f-7361df1721f7" path="/var/lib/kubelet/pods/c4d2bf04-b691-4acb-b40f-7361df1721f7/volumes" Jan 21 10:11:54 crc kubenswrapper[4684]: I0121 10:11:54.635114 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-4h8jv" event={"ID":"296b96a7-25e7-453e-bb53-179124ee274a","Type":"ContainerStarted","Data":"6011d0b5820ca056c1c878fb3609a9d7c4574e600173088ee8d157134a63f495"} Jan 21 10:11:54 crc kubenswrapper[4684]: I0121 10:11:54.635173 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-4h8jv" event={"ID":"296b96a7-25e7-453e-bb53-179124ee274a","Type":"ContainerStarted","Data":"e705d2c7631217d8df288a15847b19b7a10ccf4780bc99cb1d84c052dd9584ee"} Jan 21 10:11:54 crc kubenswrapper[4684]: I0121 10:11:54.636448 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-4h8jv" Jan 21 10:11:54 crc kubenswrapper[4684]: I0121 10:11:54.660394 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-4h8jv" podStartSLOduration=3.66035393 podStartE2EDuration="3.66035393s" podCreationTimestamp="2026-01-21 10:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:11:54.657572968 +0000 UTC m=+352.415655965" watchObservedRunningTime="2026-01-21 10:11:54.66035393 +0000 UTC m=+352.418436897" Jan 21 10:11:55 crc kubenswrapper[4684]: I0121 10:11:55.065748 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-f4446cbf-4h8jv" Jan 21 10:12:07 crc kubenswrapper[4684]: I0121 10:12:07.302853 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:12:07 crc kubenswrapper[4684]: I0121 10:12:07.303785 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Jan 21 10:12:08 crc kubenswrapper[4684]: I0121 10:12:08.822499 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-6xpfr"] Jan 21 10:12:08 crc kubenswrapper[4684]: I0121 10:12:08.824274 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-6xpfr" Jan 21 10:12:08 crc kubenswrapper[4684]: I0121 10:12:08.836319 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-6xpfr"] Jan 21 10:12:08 crc kubenswrapper[4684]: I0121 10:12:08.935854 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-6xpfr\" (UID: \"0fdf7054-f5dd-4963-a9d4-8a843e3f9e23\") " pod="openshift-image-registry/image-registry-66df7c8f76-6xpfr" Jan 21 10:12:08 crc kubenswrapper[4684]: I0121 10:12:08.935934 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0fdf7054-f5dd-4963-a9d4-8a843e3f9e23-registry-certificates\") pod \"image-registry-66df7c8f76-6xpfr\" (UID: \"0fdf7054-f5dd-4963-a9d4-8a843e3f9e23\") " pod="openshift-image-registry/image-registry-66df7c8f76-6xpfr" Jan 21 10:12:08 crc kubenswrapper[4684]: I0121 10:12:08.935979 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0fdf7054-f5dd-4963-a9d4-8a843e3f9e23-registry-tls\") pod \"image-registry-66df7c8f76-6xpfr\" (UID: \"0fdf7054-f5dd-4963-a9d4-8a843e3f9e23\") " pod="openshift-image-registry/image-registry-66df7c8f76-6xpfr" Jan 21 10:12:08 crc kubenswrapper[4684]: I0121 10:12:08.936004 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0fdf7054-f5dd-4963-a9d4-8a843e3f9e23-trusted-ca\") pod \"image-registry-66df7c8f76-6xpfr\" (UID: \"0fdf7054-f5dd-4963-a9d4-8a843e3f9e23\") " pod="openshift-image-registry/image-registry-66df7c8f76-6xpfr" Jan 21 10:12:08 crc kubenswrapper[4684]: I0121 10:12:08.936105 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fdf7054-f5dd-4963-a9d4-8a843e3f9e23-bound-sa-token\") pod \"image-registry-66df7c8f76-6xpfr\" (UID: \"0fdf7054-f5dd-4963-a9d4-8a843e3f9e23\") " pod="openshift-image-registry/image-registry-66df7c8f76-6xpfr" Jan 21 10:12:08 crc kubenswrapper[4684]: I0121 10:12:08.936161 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plrcm\" (UniqueName: \"kubernetes.io/projected/0fdf7054-f5dd-4963-a9d4-8a843e3f9e23-kube-api-access-plrcm\") pod \"image-registry-66df7c8f76-6xpfr\" (UID: \"0fdf7054-f5dd-4963-a9d4-8a843e3f9e23\") " pod="openshift-image-registry/image-registry-66df7c8f76-6xpfr" Jan 21 10:12:08 crc kubenswrapper[4684]: I0121 10:12:08.936195 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0fdf7054-f5dd-4963-a9d4-8a843e3f9e23-ca-trust-extracted\") pod \"image-registry-66df7c8f76-6xpfr\" (UID: 
\"0fdf7054-f5dd-4963-a9d4-8a843e3f9e23\") " pod="openshift-image-registry/image-registry-66df7c8f76-6xpfr" Jan 21 10:12:08 crc kubenswrapper[4684]: I0121 10:12:08.936232 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0fdf7054-f5dd-4963-a9d4-8a843e3f9e23-installation-pull-secrets\") pod \"image-registry-66df7c8f76-6xpfr\" (UID: \"0fdf7054-f5dd-4963-a9d4-8a843e3f9e23\") " pod="openshift-image-registry/image-registry-66df7c8f76-6xpfr" Jan 21 10:12:08 crc kubenswrapper[4684]: I0121 10:12:08.960194 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-6xpfr\" (UID: \"0fdf7054-f5dd-4963-a9d4-8a843e3f9e23\") " pod="openshift-image-registry/image-registry-66df7c8f76-6xpfr" Jan 21 10:12:09 crc kubenswrapper[4684]: I0121 10:12:09.037559 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fdf7054-f5dd-4963-a9d4-8a843e3f9e23-bound-sa-token\") pod \"image-registry-66df7c8f76-6xpfr\" (UID: \"0fdf7054-f5dd-4963-a9d4-8a843e3f9e23\") " pod="openshift-image-registry/image-registry-66df7c8f76-6xpfr" Jan 21 10:12:09 crc kubenswrapper[4684]: I0121 10:12:09.037639 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plrcm\" (UniqueName: \"kubernetes.io/projected/0fdf7054-f5dd-4963-a9d4-8a843e3f9e23-kube-api-access-plrcm\") pod \"image-registry-66df7c8f76-6xpfr\" (UID: \"0fdf7054-f5dd-4963-a9d4-8a843e3f9e23\") " pod="openshift-image-registry/image-registry-66df7c8f76-6xpfr" Jan 21 10:12:09 crc kubenswrapper[4684]: I0121 10:12:09.037673 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0fdf7054-f5dd-4963-a9d4-8a843e3f9e23-ca-trust-extracted\") pod \"image-registry-66df7c8f76-6xpfr\" (UID: \"0fdf7054-f5dd-4963-a9d4-8a843e3f9e23\") " pod="openshift-image-registry/image-registry-66df7c8f76-6xpfr" Jan 21 10:12:09 crc kubenswrapper[4684]: I0121 10:12:09.037709 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0fdf7054-f5dd-4963-a9d4-8a843e3f9e23-installation-pull-secrets\") pod \"image-registry-66df7c8f76-6xpfr\" (UID: \"0fdf7054-f5dd-4963-a9d4-8a843e3f9e23\") " pod="openshift-image-registry/image-registry-66df7c8f76-6xpfr" Jan 21 10:12:09 crc kubenswrapper[4684]: I0121 10:12:09.037751 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0fdf7054-f5dd-4963-a9d4-8a843e3f9e23-registry-certificates\") pod \"image-registry-66df7c8f76-6xpfr\" (UID: \"0fdf7054-f5dd-4963-a9d4-8a843e3f9e23\") " pod="openshift-image-registry/image-registry-66df7c8f76-6xpfr" Jan 21 10:12:09 crc kubenswrapper[4684]: I0121 10:12:09.037786 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0fdf7054-f5dd-4963-a9d4-8a843e3f9e23-registry-tls\") pod \"image-registry-66df7c8f76-6xpfr\" (UID: \"0fdf7054-f5dd-4963-a9d4-8a843e3f9e23\") " pod="openshift-image-registry/image-registry-66df7c8f76-6xpfr" Jan 21 10:12:09 crc kubenswrapper[4684]: 
I0121 10:12:09.037805 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0fdf7054-f5dd-4963-a9d4-8a843e3f9e23-trusted-ca\") pod \"image-registry-66df7c8f76-6xpfr\" (UID: \"0fdf7054-f5dd-4963-a9d4-8a843e3f9e23\") " pod="openshift-image-registry/image-registry-66df7c8f76-6xpfr" Jan 21 10:12:09 crc kubenswrapper[4684]: I0121 10:12:09.038842 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0fdf7054-f5dd-4963-a9d4-8a843e3f9e23-ca-trust-extracted\") pod \"image-registry-66df7c8f76-6xpfr\" (UID: \"0fdf7054-f5dd-4963-a9d4-8a843e3f9e23\") " pod="openshift-image-registry/image-registry-66df7c8f76-6xpfr" Jan 21 10:12:09 crc kubenswrapper[4684]: I0121 10:12:09.039448 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0fdf7054-f5dd-4963-a9d4-8a843e3f9e23-registry-certificates\") pod \"image-registry-66df7c8f76-6xpfr\" (UID: \"0fdf7054-f5dd-4963-a9d4-8a843e3f9e23\") " pod="openshift-image-registry/image-registry-66df7c8f76-6xpfr" Jan 21 10:12:09 crc kubenswrapper[4684]: I0121 10:12:09.039626 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0fdf7054-f5dd-4963-a9d4-8a843e3f9e23-trusted-ca\") pod \"image-registry-66df7c8f76-6xpfr\" (UID: \"0fdf7054-f5dd-4963-a9d4-8a843e3f9e23\") " pod="openshift-image-registry/image-registry-66df7c8f76-6xpfr" Jan 21 10:12:09 crc kubenswrapper[4684]: I0121 10:12:09.045944 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0fdf7054-f5dd-4963-a9d4-8a843e3f9e23-installation-pull-secrets\") pod \"image-registry-66df7c8f76-6xpfr\" (UID: \"0fdf7054-f5dd-4963-a9d4-8a843e3f9e23\") " pod="openshift-image-registry/image-registry-66df7c8f76-6xpfr" Jan 21 10:12:09 crc kubenswrapper[4684]: I0121 10:12:09.046010 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0fdf7054-f5dd-4963-a9d4-8a843e3f9e23-registry-tls\") pod \"image-registry-66df7c8f76-6xpfr\" (UID: \"0fdf7054-f5dd-4963-a9d4-8a843e3f9e23\") " pod="openshift-image-registry/image-registry-66df7c8f76-6xpfr" Jan 21 10:12:09 crc kubenswrapper[4684]: I0121 10:12:09.053897 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fdf7054-f5dd-4963-a9d4-8a843e3f9e23-bound-sa-token\") pod \"image-registry-66df7c8f76-6xpfr\" (UID: \"0fdf7054-f5dd-4963-a9d4-8a843e3f9e23\") " pod="openshift-image-registry/image-registry-66df7c8f76-6xpfr" Jan 21 10:12:09 crc kubenswrapper[4684]: I0121 10:12:09.057277 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plrcm\" (UniqueName: \"kubernetes.io/projected/0fdf7054-f5dd-4963-a9d4-8a843e3f9e23-kube-api-access-plrcm\") pod \"image-registry-66df7c8f76-6xpfr\" (UID: \"0fdf7054-f5dd-4963-a9d4-8a843e3f9e23\") " pod="openshift-image-registry/image-registry-66df7c8f76-6xpfr" Jan 21 10:12:09 crc kubenswrapper[4684]: I0121 10:12:09.156962 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-6xpfr" Jan 21 10:12:09 crc kubenswrapper[4684]: I0121 10:12:09.644627 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-6xpfr"] Jan 21 10:12:09 crc kubenswrapper[4684]: I0121 10:12:09.729711 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-6xpfr" event={"ID":"0fdf7054-f5dd-4963-a9d4-8a843e3f9e23","Type":"ContainerStarted","Data":"daa20a74dc0dc25fb118dcdb35ebcd8ac819efd3d12e2aa39fcc4fd3ee6a0cc6"} Jan 21 10:12:10 crc kubenswrapper[4684]: I0121 10:12:10.738375 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-6xpfr" event={"ID":"0fdf7054-f5dd-4963-a9d4-8a843e3f9e23","Type":"ContainerStarted","Data":"eb29365102ce31faa3128ca48e026ab4cabc2b66793a8c73d3a42678950b320f"} Jan 21 10:12:10 crc kubenswrapper[4684]: I0121 10:12:10.738854 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-6xpfr" Jan 21 10:12:10 crc kubenswrapper[4684]: I0121 10:12:10.769413 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-6xpfr" podStartSLOduration=2.769383555 podStartE2EDuration="2.769383555s" podCreationTimestamp="2026-01-21 10:12:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:12:10.765448829 +0000 UTC m=+368.523531836" watchObservedRunningTime="2026-01-21 10:12:10.769383555 +0000 UTC m=+368.527466532" Jan 21 10:12:29 crc kubenswrapper[4684]: I0121 10:12:29.164762 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-6xpfr" Jan 21 10:12:29 crc kubenswrapper[4684]: I0121 10:12:29.247174 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8k2vm"] Jan 21 10:12:31 crc kubenswrapper[4684]: I0121 10:12:31.537296 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54467446f7-tv82w"] Jan 21 10:12:31 crc kubenswrapper[4684]: I0121 10:12:31.538246 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-54467446f7-tv82w" podUID="cf88b4f5-7d08-4c52-b4bd-948849eba8dc" containerName="controller-manager" containerID="cri-o://9e8973a5e5593549300b70655015d11875acb90fa5ac62330dc759b5045c39dc" gracePeriod=30 Jan 21 10:12:31 crc kubenswrapper[4684]: I0121 10:12:31.871896 4684 generic.go:334] "Generic (PLEG): container finished" podID="cf88b4f5-7d08-4c52-b4bd-948849eba8dc" containerID="9e8973a5e5593549300b70655015d11875acb90fa5ac62330dc759b5045c39dc" exitCode=0 Jan 21 10:12:31 crc kubenswrapper[4684]: I0121 10:12:31.872119 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54467446f7-tv82w" event={"ID":"cf88b4f5-7d08-4c52-b4bd-948849eba8dc","Type":"ContainerDied","Data":"9e8973a5e5593549300b70655015d11875acb90fa5ac62330dc759b5045c39dc"} Jan 21 10:12:31 crc kubenswrapper[4684]: I0121 10:12:31.990642 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-54467446f7-tv82w" Jan 21 10:12:32 crc kubenswrapper[4684]: I0121 10:12:32.162524 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf88b4f5-7d08-4c52-b4bd-948849eba8dc-config\") pod \"cf88b4f5-7d08-4c52-b4bd-948849eba8dc\" (UID: \"cf88b4f5-7d08-4c52-b4bd-948849eba8dc\") " Jan 21 10:12:32 crc kubenswrapper[4684]: I0121 10:12:32.162583 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf88b4f5-7d08-4c52-b4bd-948849eba8dc-proxy-ca-bundles\") pod \"cf88b4f5-7d08-4c52-b4bd-948849eba8dc\" (UID: \"cf88b4f5-7d08-4c52-b4bd-948849eba8dc\") " Jan 21 10:12:32 crc kubenswrapper[4684]: I0121 10:12:32.162607 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rp59\" (UniqueName: \"kubernetes.io/projected/cf88b4f5-7d08-4c52-b4bd-948849eba8dc-kube-api-access-7rp59\") pod \"cf88b4f5-7d08-4c52-b4bd-948849eba8dc\" (UID: \"cf88b4f5-7d08-4c52-b4bd-948849eba8dc\") " Jan 21 10:12:32 crc kubenswrapper[4684]: I0121 10:12:32.162636 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf88b4f5-7d08-4c52-b4bd-948849eba8dc-client-ca\") pod \"cf88b4f5-7d08-4c52-b4bd-948849eba8dc\" (UID: \"cf88b4f5-7d08-4c52-b4bd-948849eba8dc\") " Jan 21 10:12:32 crc kubenswrapper[4684]: I0121 10:12:32.162664 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf88b4f5-7d08-4c52-b4bd-948849eba8dc-serving-cert\") pod \"cf88b4f5-7d08-4c52-b4bd-948849eba8dc\" (UID: \"cf88b4f5-7d08-4c52-b4bd-948849eba8dc\") " Jan 21 10:12:32 crc kubenswrapper[4684]: I0121 10:12:32.163918 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf88b4f5-7d08-4c52-b4bd-948849eba8dc-client-ca" (OuterVolumeSpecName: "client-ca") pod "cf88b4f5-7d08-4c52-b4bd-948849eba8dc" (UID: "cf88b4f5-7d08-4c52-b4bd-948849eba8dc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:12:32 crc kubenswrapper[4684]: I0121 10:12:32.163937 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf88b4f5-7d08-4c52-b4bd-948849eba8dc-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "cf88b4f5-7d08-4c52-b4bd-948849eba8dc" (UID: "cf88b4f5-7d08-4c52-b4bd-948849eba8dc"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:12:32 crc kubenswrapper[4684]: I0121 10:12:32.163967 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf88b4f5-7d08-4c52-b4bd-948849eba8dc-config" (OuterVolumeSpecName: "config") pod "cf88b4f5-7d08-4c52-b4bd-948849eba8dc" (UID: "cf88b4f5-7d08-4c52-b4bd-948849eba8dc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:12:32 crc kubenswrapper[4684]: I0121 10:12:32.164730 4684 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf88b4f5-7d08-4c52-b4bd-948849eba8dc-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:12:32 crc kubenswrapper[4684]: I0121 10:12:32.164756 4684 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf88b4f5-7d08-4c52-b4bd-948849eba8dc-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 10:12:32 crc kubenswrapper[4684]: I0121 10:12:32.164772 4684 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf88b4f5-7d08-4c52-b4bd-948849eba8dc-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:12:32 crc kubenswrapper[4684]: I0121 10:12:32.170379 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf88b4f5-7d08-4c52-b4bd-948849eba8dc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cf88b4f5-7d08-4c52-b4bd-948849eba8dc" (UID: "cf88b4f5-7d08-4c52-b4bd-948849eba8dc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:12:32 crc kubenswrapper[4684]: I0121 10:12:32.170685 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf88b4f5-7d08-4c52-b4bd-948849eba8dc-kube-api-access-7rp59" (OuterVolumeSpecName: "kube-api-access-7rp59") pod "cf88b4f5-7d08-4c52-b4bd-948849eba8dc" (UID: "cf88b4f5-7d08-4c52-b4bd-948849eba8dc"). InnerVolumeSpecName "kube-api-access-7rp59". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:12:32 crc kubenswrapper[4684]: I0121 10:12:32.266313 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rp59\" (UniqueName: \"kubernetes.io/projected/cf88b4f5-7d08-4c52-b4bd-948849eba8dc-kube-api-access-7rp59\") on node \"crc\" DevicePath \"\"" Jan 21 10:12:32 crc kubenswrapper[4684]: I0121 10:12:32.266763 4684 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf88b4f5-7d08-4c52-b4bd-948849eba8dc-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:12:32 crc kubenswrapper[4684]: I0121 10:12:32.882443 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-54467446f7-tv82w" event={"ID":"cf88b4f5-7d08-4c52-b4bd-948849eba8dc","Type":"ContainerDied","Data":"c608e332e6d6d2b82cbb0420a594d020481ce8216e7df4374d35342fe47725b5"} Jan 21 10:12:32 crc kubenswrapper[4684]: I0121 10:12:32.882645 4684 scope.go:117] "RemoveContainer" containerID="9e8973a5e5593549300b70655015d11875acb90fa5ac62330dc759b5045c39dc" Jan 21 10:12:32 crc kubenswrapper[4684]: I0121 10:12:32.882712 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-54467446f7-tv82w" Jan 21 10:12:32 crc kubenswrapper[4684]: I0121 10:12:32.918544 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-54467446f7-tv82w"] Jan 21 10:12:32 crc kubenswrapper[4684]: I0121 10:12:32.923387 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-54467446f7-tv82w"] Jan 21 10:12:33 crc kubenswrapper[4684]: I0121 10:12:33.355133 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-cb89bdff4-zbz7r"] Jan 21 10:12:33 crc kubenswrapper[4684]: E0121 10:12:33.355781 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf88b4f5-7d08-4c52-b4bd-948849eba8dc" containerName="controller-manager" Jan 21 10:12:33 crc kubenswrapper[4684]: I0121 10:12:33.355799 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf88b4f5-7d08-4c52-b4bd-948849eba8dc" containerName="controller-manager" Jan 21 10:12:33 crc kubenswrapper[4684]: I0121 10:12:33.355995 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf88b4f5-7d08-4c52-b4bd-948849eba8dc" containerName="controller-manager" Jan 21 10:12:33 crc kubenswrapper[4684]: I0121 10:12:33.356417 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cb89bdff4-zbz7r" Jan 21 10:12:33 crc kubenswrapper[4684]: I0121 10:12:33.359344 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 10:12:33 crc kubenswrapper[4684]: I0121 10:12:33.359562 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 10:12:33 crc kubenswrapper[4684]: I0121 10:12:33.359615 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 10:12:33 crc kubenswrapper[4684]: I0121 10:12:33.359778 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 10:12:33 crc kubenswrapper[4684]: I0121 10:12:33.362398 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 10:12:33 crc kubenswrapper[4684]: I0121 10:12:33.364995 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 10:12:33 crc kubenswrapper[4684]: I0121 10:12:33.370616 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cb89bdff4-zbz7r"] Jan 21 10:12:33 crc kubenswrapper[4684]: I0121 10:12:33.370845 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 10:12:33 crc kubenswrapper[4684]: I0121 10:12:33.487505 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea4a90cc-c026-4d72-9dc2-7152968985d2-client-ca\") pod \"controller-manager-cb89bdff4-zbz7r\" (UID: \"ea4a90cc-c026-4d72-9dc2-7152968985d2\") " pod="openshift-controller-manager/controller-manager-cb89bdff4-zbz7r" Jan 21 10:12:33 crc kubenswrapper[4684]: I0121 10:12:33.487569 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea4a90cc-c026-4d72-9dc2-7152968985d2-serving-cert\") pod \"controller-manager-cb89bdff4-zbz7r\" (UID: \"ea4a90cc-c026-4d72-9dc2-7152968985d2\") " pod="openshift-controller-manager/controller-manager-cb89bdff4-zbz7r" Jan 21 10:12:33 crc kubenswrapper[4684]: I0121 10:12:33.487597 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea4a90cc-c026-4d72-9dc2-7152968985d2-config\") pod \"controller-manager-cb89bdff4-zbz7r\" (UID: \"ea4a90cc-c026-4d72-9dc2-7152968985d2\") " pod="openshift-controller-manager/controller-manager-cb89bdff4-zbz7r" Jan 21 10:12:33 crc kubenswrapper[4684]: I0121 10:12:33.487657 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ea4a90cc-c026-4d72-9dc2-7152968985d2-proxy-ca-bundles\") pod \"controller-manager-cb89bdff4-zbz7r\" (UID: \"ea4a90cc-c026-4d72-9dc2-7152968985d2\") " pod="openshift-controller-manager/controller-manager-cb89bdff4-zbz7r" Jan 21 10:12:33 crc kubenswrapper[4684]: I0121 10:12:33.487678 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5tpr\" (UniqueName: \"kubernetes.io/projected/ea4a90cc-c026-4d72-9dc2-7152968985d2-kube-api-access-f5tpr\") pod \"controller-manager-cb89bdff4-zbz7r\" (UID: \"ea4a90cc-c026-4d72-9dc2-7152968985d2\") " pod="openshift-controller-manager/controller-manager-cb89bdff4-zbz7r" Jan 21 10:12:33 crc kubenswrapper[4684]: I0121 10:12:33.588663 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea4a90cc-c026-4d72-9dc2-7152968985d2-client-ca\") pod \"controller-manager-cb89bdff4-zbz7r\" (UID: \"ea4a90cc-c026-4d72-9dc2-7152968985d2\") " pod="openshift-controller-manager/controller-manager-cb89bdff4-zbz7r" Jan 21 10:12:33 crc kubenswrapper[4684]: I0121 10:12:33.588715 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea4a90cc-c026-4d72-9dc2-7152968985d2-serving-cert\") pod \"controller-manager-cb89bdff4-zbz7r\" (UID: \"ea4a90cc-c026-4d72-9dc2-7152968985d2\") " pod="openshift-controller-manager/controller-manager-cb89bdff4-zbz7r" Jan 21 10:12:33 crc kubenswrapper[4684]: I0121 10:12:33.588739 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea4a90cc-c026-4d72-9dc2-7152968985d2-config\") pod \"controller-manager-cb89bdff4-zbz7r\" (UID: \"ea4a90cc-c026-4d72-9dc2-7152968985d2\") " pod="openshift-controller-manager/controller-manager-cb89bdff4-zbz7r" Jan 21 10:12:33 crc kubenswrapper[4684]: I0121 10:12:33.588784 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ea4a90cc-c026-4d72-9dc2-7152968985d2-proxy-ca-bundles\") pod \"controller-manager-cb89bdff4-zbz7r\" (UID: \"ea4a90cc-c026-4d72-9dc2-7152968985d2\") " pod="openshift-controller-manager/controller-manager-cb89bdff4-zbz7r" Jan 21 10:12:33 crc kubenswrapper[4684]: I0121 10:12:33.588807 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5tpr\" (UniqueName: \"kubernetes.io/projected/ea4a90cc-c026-4d72-9dc2-7152968985d2-kube-api-access-f5tpr\") pod 
\"controller-manager-cb89bdff4-zbz7r\" (UID: \"ea4a90cc-c026-4d72-9dc2-7152968985d2\") " pod="openshift-controller-manager/controller-manager-cb89bdff4-zbz7r" Jan 21 10:12:33 crc kubenswrapper[4684]: I0121 10:12:33.590395 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea4a90cc-c026-4d72-9dc2-7152968985d2-client-ca\") pod \"controller-manager-cb89bdff4-zbz7r\" (UID: \"ea4a90cc-c026-4d72-9dc2-7152968985d2\") " pod="openshift-controller-manager/controller-manager-cb89bdff4-zbz7r" Jan 21 10:12:33 crc kubenswrapper[4684]: I0121 10:12:33.590806 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea4a90cc-c026-4d72-9dc2-7152968985d2-config\") pod \"controller-manager-cb89bdff4-zbz7r\" (UID: \"ea4a90cc-c026-4d72-9dc2-7152968985d2\") " pod="openshift-controller-manager/controller-manager-cb89bdff4-zbz7r" Jan 21 10:12:33 crc kubenswrapper[4684]: I0121 10:12:33.593399 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ea4a90cc-c026-4d72-9dc2-7152968985d2-proxy-ca-bundles\") pod \"controller-manager-cb89bdff4-zbz7r\" (UID: \"ea4a90cc-c026-4d72-9dc2-7152968985d2\") " pod="openshift-controller-manager/controller-manager-cb89bdff4-zbz7r" Jan 21 10:12:33 crc kubenswrapper[4684]: I0121 10:12:33.599275 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea4a90cc-c026-4d72-9dc2-7152968985d2-serving-cert\") pod \"controller-manager-cb89bdff4-zbz7r\" (UID: \"ea4a90cc-c026-4d72-9dc2-7152968985d2\") " pod="openshift-controller-manager/controller-manager-cb89bdff4-zbz7r" Jan 21 10:12:33 crc kubenswrapper[4684]: I0121 10:12:33.614583 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5tpr\" (UniqueName: \"kubernetes.io/projected/ea4a90cc-c026-4d72-9dc2-7152968985d2-kube-api-access-f5tpr\") pod \"controller-manager-cb89bdff4-zbz7r\" (UID: \"ea4a90cc-c026-4d72-9dc2-7152968985d2\") " pod="openshift-controller-manager/controller-manager-cb89bdff4-zbz7r" Jan 21 10:12:33 crc kubenswrapper[4684]: I0121 10:12:33.682558 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-cb89bdff4-zbz7r" Jan 21 10:12:33 crc kubenswrapper[4684]: I0121 10:12:33.919997 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cb89bdff4-zbz7r"] Jan 21 10:12:34 crc kubenswrapper[4684]: I0121 10:12:34.522187 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf88b4f5-7d08-4c52-b4bd-948849eba8dc" path="/var/lib/kubelet/pods/cf88b4f5-7d08-4c52-b4bd-948849eba8dc/volumes" Jan 21 10:12:34 crc kubenswrapper[4684]: I0121 10:12:34.911779 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cb89bdff4-zbz7r" event={"ID":"ea4a90cc-c026-4d72-9dc2-7152968985d2","Type":"ContainerStarted","Data":"a6564bae8c9fce8e274d7bf6cb3c0aaf4afa1e8d796bf49f53aa8df3f4300025"} Jan 21 10:12:34 crc kubenswrapper[4684]: I0121 10:12:34.912253 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-cb89bdff4-zbz7r" Jan 21 10:12:34 crc kubenswrapper[4684]: I0121 10:12:34.912267 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cb89bdff4-zbz7r" event={"ID":"ea4a90cc-c026-4d72-9dc2-7152968985d2","Type":"ContainerStarted","Data":"1e2db8c0a137a11bbcabb3ff10269d29dc190f80446a7209f0abfc664e8b311f"} Jan 21 10:12:34 crc kubenswrapper[4684]: I0121 10:12:34.917204 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-cb89bdff4-zbz7r" Jan 21 10:12:34 crc kubenswrapper[4684]: I0121 10:12:34.931157 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-cb89bdff4-zbz7r" podStartSLOduration=3.931130691 podStartE2EDuration="3.931130691s" podCreationTimestamp="2026-01-21 10:12:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:12:34.930020103 +0000 UTC m=+392.688103130" watchObservedRunningTime="2026-01-21 10:12:34.931130691 +0000 UTC m=+392.689213658" Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.241971 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cnf2q"] Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.250520 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cnf2q" podUID="2b89446d-5402-4130-b07f-cdd46b6e3d5d" containerName="registry-server" containerID="cri-o://11be68f825a04b7f3db083ee34acd247538f1ad6937ae554c9483eda71797d1b" gracePeriod=30 Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.250975 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2q67n"] Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.261472 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6wfjd"] Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.267504 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gs98q"] Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.267791 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gs98q" podUID="0e22e72a-79a2-45ea-8093-17f56c2f1748" 
containerName="registry-server" containerID="cri-o://c254c78a46bfb0ffcf97995f1c7f556b21394ce9e1d5065667d03b08fcade55b" gracePeriod=30 Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.261740 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-6wfjd" podUID="603ee3d6-73a1-4796-a857-84f9c889b3af" containerName="marketplace-operator" containerID="cri-o://faca393790fda69cfde1669c95a027299cb385025001fd137bfd64de9e2ee7bc" gracePeriod=30 Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.276677 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2q67n" podUID="e08a2ff1-2079-4571-9c86-3167ce7e20d6" containerName="registry-server" containerID="cri-o://5eb73ba6565e83c8259750262c8b72b47a644bef34badcc22b143ccf6871dc31" gracePeriod=30 Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.278186 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qqrkg"] Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.278466 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qqrkg" podUID="b0b6cdf0-27ef-4701-b0d2-4b877b043253" containerName="registry-server" containerID="cri-o://d74931c1ff2881336cf1b55fc427105f721679cf38dd2768e7066ceb79091edf" gracePeriod=30 Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.293954 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fpqww"] Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.295109 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fpqww" Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.358278 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fpqww"] Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.428952 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8bc5222c-6074-4cbe-b442-7b41ed0ed363-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fpqww\" (UID: \"8bc5222c-6074-4cbe-b442-7b41ed0ed363\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpqww" Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.429034 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh24c\" (UniqueName: \"kubernetes.io/projected/8bc5222c-6074-4cbe-b442-7b41ed0ed363-kube-api-access-jh24c\") pod \"marketplace-operator-79b997595-fpqww\" (UID: \"8bc5222c-6074-4cbe-b442-7b41ed0ed363\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpqww" Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.429089 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8bc5222c-6074-4cbe-b442-7b41ed0ed363-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fpqww\" (UID: \"8bc5222c-6074-4cbe-b442-7b41ed0ed363\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpqww" Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.530250 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8bc5222c-6074-4cbe-b442-7b41ed0ed363-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fpqww\" (UID: \"8bc5222c-6074-4cbe-b442-7b41ed0ed363\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpqww" Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.530310 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh24c\" (UniqueName: \"kubernetes.io/projected/8bc5222c-6074-4cbe-b442-7b41ed0ed363-kube-api-access-jh24c\") pod \"marketplace-operator-79b997595-fpqww\" (UID: \"8bc5222c-6074-4cbe-b442-7b41ed0ed363\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpqww" Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.530361 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8bc5222c-6074-4cbe-b442-7b41ed0ed363-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fpqww\" (UID: \"8bc5222c-6074-4cbe-b442-7b41ed0ed363\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpqww" Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.531909 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8bc5222c-6074-4cbe-b442-7b41ed0ed363-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fpqww\" (UID: \"8bc5222c-6074-4cbe-b442-7b41ed0ed363\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpqww" Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.544994 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8bc5222c-6074-4cbe-b442-7b41ed0ed363-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fpqww\" (UID: \"8bc5222c-6074-4cbe-b442-7b41ed0ed363\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpqww" Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.550416 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh24c\" (UniqueName: \"kubernetes.io/projected/8bc5222c-6074-4cbe-b442-7b41ed0ed363-kube-api-access-jh24c\") pod \"marketplace-operator-79b997595-fpqww\" (UID: \"8bc5222c-6074-4cbe-b442-7b41ed0ed363\") " pod="openshift-marketplace/marketplace-operator-79b997595-fpqww" Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.672608 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fpqww" Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.716498 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gs98q" Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.834291 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgb2d\" (UniqueName: \"kubernetes.io/projected/0e22e72a-79a2-45ea-8093-17f56c2f1748-kube-api-access-dgb2d\") pod \"0e22e72a-79a2-45ea-8093-17f56c2f1748\" (UID: \"0e22e72a-79a2-45ea-8093-17f56c2f1748\") " Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.834362 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e22e72a-79a2-45ea-8093-17f56c2f1748-catalog-content\") pod \"0e22e72a-79a2-45ea-8093-17f56c2f1748\" (UID: \"0e22e72a-79a2-45ea-8093-17f56c2f1748\") " Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.834499 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e22e72a-79a2-45ea-8093-17f56c2f1748-utilities\") pod \"0e22e72a-79a2-45ea-8093-17f56c2f1748\" (UID: \"0e22e72a-79a2-45ea-8093-17f56c2f1748\") " Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.835908 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e22e72a-79a2-45ea-8093-17f56c2f1748-utilities" (OuterVolumeSpecName: "utilities") pod "0e22e72a-79a2-45ea-8093-17f56c2f1748" (UID: "0e22e72a-79a2-45ea-8093-17f56c2f1748"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.840603 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e22e72a-79a2-45ea-8093-17f56c2f1748-kube-api-access-dgb2d" (OuterVolumeSpecName: "kube-api-access-dgb2d") pod "0e22e72a-79a2-45ea-8093-17f56c2f1748" (UID: "0e22e72a-79a2-45ea-8093-17f56c2f1748"). InnerVolumeSpecName "kube-api-access-dgb2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.869218 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6wfjd" Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.875989 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cnf2q" Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.880789 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e22e72a-79a2-45ea-8093-17f56c2f1748-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e22e72a-79a2-45ea-8093-17f56c2f1748" (UID: "0e22e72a-79a2-45ea-8093-17f56c2f1748"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.916847 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2q67n" Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.935784 4684 generic.go:334] "Generic (PLEG): container finished" podID="2b89446d-5402-4130-b07f-cdd46b6e3d5d" containerID="11be68f825a04b7f3db083ee34acd247538f1ad6937ae554c9483eda71797d1b" exitCode=0 Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.935903 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cnf2q" event={"ID":"2b89446d-5402-4130-b07f-cdd46b6e3d5d","Type":"ContainerDied","Data":"11be68f825a04b7f3db083ee34acd247538f1ad6937ae554c9483eda71797d1b"} Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.935947 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cnf2q" event={"ID":"2b89446d-5402-4130-b07f-cdd46b6e3d5d","Type":"ContainerDied","Data":"d05d9292c3448fde8425c255ac1621347c9256fddcd1e5395c178b0b82c0f291"} Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.935978 4684 scope.go:117] "RemoveContainer" containerID="11be68f825a04b7f3db083ee34acd247538f1ad6937ae554c9483eda71797d1b" Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.936161 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cnf2q" Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.936631 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgb2d\" (UniqueName: \"kubernetes.io/projected/0e22e72a-79a2-45ea-8093-17f56c2f1748-kube-api-access-dgb2d\") on node \"crc\" DevicePath \"\"" Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.936655 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e22e72a-79a2-45ea-8093-17f56c2f1748-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.936665 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e22e72a-79a2-45ea-8093-17f56c2f1748-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.942872 4684 generic.go:334] "Generic (PLEG): container finished" podID="603ee3d6-73a1-4796-a857-84f9c889b3af" containerID="faca393790fda69cfde1669c95a027299cb385025001fd137bfd64de9e2ee7bc" exitCode=0 Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.943025 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6wfjd" Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.943079 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6wfjd" event={"ID":"603ee3d6-73a1-4796-a857-84f9c889b3af","Type":"ContainerDied","Data":"faca393790fda69cfde1669c95a027299cb385025001fd137bfd64de9e2ee7bc"} Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.943169 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6wfjd" event={"ID":"603ee3d6-73a1-4796-a857-84f9c889b3af","Type":"ContainerDied","Data":"1fedff62e2ca96a046c5e0f2649198f0bfaaa6171d161e0c6de7c32e99dd019c"} Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.955578 4684 generic.go:334] "Generic (PLEG): container finished" podID="0e22e72a-79a2-45ea-8093-17f56c2f1748" containerID="c254c78a46bfb0ffcf97995f1c7f556b21394ce9e1d5065667d03b08fcade55b" exitCode=0 Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.955745 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gs98q" Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.955772 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gs98q" event={"ID":"0e22e72a-79a2-45ea-8093-17f56c2f1748","Type":"ContainerDied","Data":"c254c78a46bfb0ffcf97995f1c7f556b21394ce9e1d5065667d03b08fcade55b"} Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.955843 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gs98q" event={"ID":"0e22e72a-79a2-45ea-8093-17f56c2f1748","Type":"ContainerDied","Data":"ab3c3b01aef550018189716130c7b923a65445e00a9bd8b0aa575e0b09eede59"} Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.972136 4684 generic.go:334] "Generic (PLEG): container finished" podID="b0b6cdf0-27ef-4701-b0d2-4b877b043253" containerID="d74931c1ff2881336cf1b55fc427105f721679cf38dd2768e7066ceb79091edf" exitCode=0 Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.972419 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqrkg" event={"ID":"b0b6cdf0-27ef-4701-b0d2-4b877b043253","Type":"ContainerDied","Data":"d74931c1ff2881336cf1b55fc427105f721679cf38dd2768e7066ceb79091edf"} Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.974040 4684 scope.go:117] "RemoveContainer" containerID="82f4fefddc65875c5065817a7d42f4942736cf38007391bbae1a03208f20469d" Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.984085 4684 generic.go:334] "Generic (PLEG): container finished" podID="e08a2ff1-2079-4571-9c86-3167ce7e20d6" containerID="5eb73ba6565e83c8259750262c8b72b47a644bef34badcc22b143ccf6871dc31" exitCode=0 Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.984936 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q67n" event={"ID":"e08a2ff1-2079-4571-9c86-3167ce7e20d6","Type":"ContainerDied","Data":"5eb73ba6565e83c8259750262c8b72b47a644bef34badcc22b143ccf6871dc31"} Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.984970 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q67n" event={"ID":"e08a2ff1-2079-4571-9c86-3167ce7e20d6","Type":"ContainerDied","Data":"1c85c1b60aed8813dd26999753ccc7eed2a687aae7c308f1fb3c7df6492d969d"} Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 
10:12:36.984981 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2q67n" Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.995340 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qqrkg" Jan 21 10:12:36 crc kubenswrapper[4684]: I0121 10:12:36.998653 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gs98q"] Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.010079 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gs98q"] Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.014652 4684 scope.go:117] "RemoveContainer" containerID="8a8dd92bcc258df18707938ce7b6886dd5fefc7f62bea3349184ae39d9330047" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.037680 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/603ee3d6-73a1-4796-a857-84f9c889b3af-marketplace-operator-metrics\") pod \"603ee3d6-73a1-4796-a857-84f9c889b3af\" (UID: \"603ee3d6-73a1-4796-a857-84f9c889b3af\") " Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.037736 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc2zj\" (UniqueName: \"kubernetes.io/projected/603ee3d6-73a1-4796-a857-84f9c889b3af-kube-api-access-bc2zj\") pod \"603ee3d6-73a1-4796-a857-84f9c889b3af\" (UID: \"603ee3d6-73a1-4796-a857-84f9c889b3af\") " Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.037843 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzr4j\" (UniqueName: \"kubernetes.io/projected/2b89446d-5402-4130-b07f-cdd46b6e3d5d-kube-api-access-zzr4j\") pod \"2b89446d-5402-4130-b07f-cdd46b6e3d5d\" (UID: \"2b89446d-5402-4130-b07f-cdd46b6e3d5d\") " Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.037906 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e08a2ff1-2079-4571-9c86-3167ce7e20d6-utilities\") pod \"e08a2ff1-2079-4571-9c86-3167ce7e20d6\" (UID: \"e08a2ff1-2079-4571-9c86-3167ce7e20d6\") " Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.037928 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b89446d-5402-4130-b07f-cdd46b6e3d5d-catalog-content\") pod \"2b89446d-5402-4130-b07f-cdd46b6e3d5d\" (UID: \"2b89446d-5402-4130-b07f-cdd46b6e3d5d\") " Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.037961 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lbtv\" (UniqueName: \"kubernetes.io/projected/e08a2ff1-2079-4571-9c86-3167ce7e20d6-kube-api-access-4lbtv\") pod \"e08a2ff1-2079-4571-9c86-3167ce7e20d6\" (UID: \"e08a2ff1-2079-4571-9c86-3167ce7e20d6\") " Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.038087 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/603ee3d6-73a1-4796-a857-84f9c889b3af-marketplace-trusted-ca\") pod \"603ee3d6-73a1-4796-a857-84f9c889b3af\" (UID: \"603ee3d6-73a1-4796-a857-84f9c889b3af\") " Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.038140 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b89446d-5402-4130-b07f-cdd46b6e3d5d-utilities\") pod \"2b89446d-5402-4130-b07f-cdd46b6e3d5d\" (UID: \"2b89446d-5402-4130-b07f-cdd46b6e3d5d\") " Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.038180 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e08a2ff1-2079-4571-9c86-3167ce7e20d6-catalog-content\") pod \"e08a2ff1-2079-4571-9c86-3167ce7e20d6\" (UID: \"e08a2ff1-2079-4571-9c86-3167ce7e20d6\") " Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.041698 4684 scope.go:117] "RemoveContainer" containerID="11be68f825a04b7f3db083ee34acd247538f1ad6937ae554c9483eda71797d1b" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.042509 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/603ee3d6-73a1-4796-a857-84f9c889b3af-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "603ee3d6-73a1-4796-a857-84f9c889b3af" (UID: "603ee3d6-73a1-4796-a857-84f9c889b3af"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.046053 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b89446d-5402-4130-b07f-cdd46b6e3d5d-utilities" (OuterVolumeSpecName: "utilities") pod "2b89446d-5402-4130-b07f-cdd46b6e3d5d" (UID: "2b89446d-5402-4130-b07f-cdd46b6e3d5d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.046273 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e08a2ff1-2079-4571-9c86-3167ce7e20d6-kube-api-access-4lbtv" (OuterVolumeSpecName: "kube-api-access-4lbtv") pod "e08a2ff1-2079-4571-9c86-3167ce7e20d6" (UID: "e08a2ff1-2079-4571-9c86-3167ce7e20d6"). InnerVolumeSpecName "kube-api-access-4lbtv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.049788 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e08a2ff1-2079-4571-9c86-3167ce7e20d6-utilities" (OuterVolumeSpecName: "utilities") pod "e08a2ff1-2079-4571-9c86-3167ce7e20d6" (UID: "e08a2ff1-2079-4571-9c86-3167ce7e20d6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:12:37 crc kubenswrapper[4684]: E0121 10:12:37.049962 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11be68f825a04b7f3db083ee34acd247538f1ad6937ae554c9483eda71797d1b\": container with ID starting with 11be68f825a04b7f3db083ee34acd247538f1ad6937ae554c9483eda71797d1b not found: ID does not exist" containerID="11be68f825a04b7f3db083ee34acd247538f1ad6937ae554c9483eda71797d1b" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.050008 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11be68f825a04b7f3db083ee34acd247538f1ad6937ae554c9483eda71797d1b"} err="failed to get container status \"11be68f825a04b7f3db083ee34acd247538f1ad6937ae554c9483eda71797d1b\": rpc error: code = NotFound desc = could not find container \"11be68f825a04b7f3db083ee34acd247538f1ad6937ae554c9483eda71797d1b\": container with ID starting with 11be68f825a04b7f3db083ee34acd247538f1ad6937ae554c9483eda71797d1b not found: ID does not exist" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.050050 4684 scope.go:117] "RemoveContainer" containerID="82f4fefddc65875c5065817a7d42f4942736cf38007391bbae1a03208f20469d" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.049966 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b89446d-5402-4130-b07f-cdd46b6e3d5d-kube-api-access-zzr4j" (OuterVolumeSpecName: "kube-api-access-zzr4j") pod "2b89446d-5402-4130-b07f-cdd46b6e3d5d" (UID: "2b89446d-5402-4130-b07f-cdd46b6e3d5d"). InnerVolumeSpecName "kube-api-access-zzr4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:12:37 crc kubenswrapper[4684]: E0121 10:12:37.050732 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82f4fefddc65875c5065817a7d42f4942736cf38007391bbae1a03208f20469d\": container with ID starting with 82f4fefddc65875c5065817a7d42f4942736cf38007391bbae1a03208f20469d not found: ID does not exist" containerID="82f4fefddc65875c5065817a7d42f4942736cf38007391bbae1a03208f20469d" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.051339 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82f4fefddc65875c5065817a7d42f4942736cf38007391bbae1a03208f20469d"} err="failed to get container status \"82f4fefddc65875c5065817a7d42f4942736cf38007391bbae1a03208f20469d\": rpc error: code = NotFound desc = could not find container \"82f4fefddc65875c5065817a7d42f4942736cf38007391bbae1a03208f20469d\": container with ID starting with 82f4fefddc65875c5065817a7d42f4942736cf38007391bbae1a03208f20469d not found: ID does not exist" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.051514 4684 scope.go:117] "RemoveContainer" containerID="8a8dd92bcc258df18707938ce7b6886dd5fefc7f62bea3349184ae39d9330047" Jan 21 10:12:37 crc kubenswrapper[4684]: E0121 10:12:37.052001 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a8dd92bcc258df18707938ce7b6886dd5fefc7f62bea3349184ae39d9330047\": container with ID starting with 8a8dd92bcc258df18707938ce7b6886dd5fefc7f62bea3349184ae39d9330047 not found: ID does not exist" containerID="8a8dd92bcc258df18707938ce7b6886dd5fefc7f62bea3349184ae39d9330047" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.052038 4684 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a8dd92bcc258df18707938ce7b6886dd5fefc7f62bea3349184ae39d9330047"} err="failed to get container status \"8a8dd92bcc258df18707938ce7b6886dd5fefc7f62bea3349184ae39d9330047\": rpc error: code = NotFound desc = could not find container \"8a8dd92bcc258df18707938ce7b6886dd5fefc7f62bea3349184ae39d9330047\": container with ID starting with 8a8dd92bcc258df18707938ce7b6886dd5fefc7f62bea3349184ae39d9330047 not found: ID does not exist" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.052059 4684 scope.go:117] "RemoveContainer" containerID="faca393790fda69cfde1669c95a027299cb385025001fd137bfd64de9e2ee7bc" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.059864 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/603ee3d6-73a1-4796-a857-84f9c889b3af-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "603ee3d6-73a1-4796-a857-84f9c889b3af" (UID: "603ee3d6-73a1-4796-a857-84f9c889b3af"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.071624 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/603ee3d6-73a1-4796-a857-84f9c889b3af-kube-api-access-bc2zj" (OuterVolumeSpecName: "kube-api-access-bc2zj") pod "603ee3d6-73a1-4796-a857-84f9c889b3af" (UID: "603ee3d6-73a1-4796-a857-84f9c889b3af"). InnerVolumeSpecName "kube-api-access-bc2zj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.071875 4684 scope.go:117] "RemoveContainer" containerID="5f3c9aa33cfe8d9e6fda3e7e375849e25b4abe430d473d50a71ddfc53a39249f" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.095803 4684 scope.go:117] "RemoveContainer" containerID="faca393790fda69cfde1669c95a027299cb385025001fd137bfd64de9e2ee7bc" Jan 21 10:12:37 crc kubenswrapper[4684]: E0121 10:12:37.096839 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"faca393790fda69cfde1669c95a027299cb385025001fd137bfd64de9e2ee7bc\": container with ID starting with faca393790fda69cfde1669c95a027299cb385025001fd137bfd64de9e2ee7bc not found: ID does not exist" containerID="faca393790fda69cfde1669c95a027299cb385025001fd137bfd64de9e2ee7bc" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.096883 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faca393790fda69cfde1669c95a027299cb385025001fd137bfd64de9e2ee7bc"} err="failed to get container status \"faca393790fda69cfde1669c95a027299cb385025001fd137bfd64de9e2ee7bc\": rpc error: code = NotFound desc = could not find container \"faca393790fda69cfde1669c95a027299cb385025001fd137bfd64de9e2ee7bc\": container with ID starting with faca393790fda69cfde1669c95a027299cb385025001fd137bfd64de9e2ee7bc not found: ID does not exist" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.096910 4684 scope.go:117] "RemoveContainer" containerID="5f3c9aa33cfe8d9e6fda3e7e375849e25b4abe430d473d50a71ddfc53a39249f" Jan 21 10:12:37 crc kubenswrapper[4684]: E0121 10:12:37.097414 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f3c9aa33cfe8d9e6fda3e7e375849e25b4abe430d473d50a71ddfc53a39249f\": container with ID starting with 
Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.097450 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f3c9aa33cfe8d9e6fda3e7e375849e25b4abe430d473d50a71ddfc53a39249f"} err="failed to get container status \"5f3c9aa33cfe8d9e6fda3e7e375849e25b4abe430d473d50a71ddfc53a39249f\": rpc error: code = NotFound desc = could not find container \"5f3c9aa33cfe8d9e6fda3e7e375849e25b4abe430d473d50a71ddfc53a39249f\": container with ID starting with 5f3c9aa33cfe8d9e6fda3e7e375849e25b4abe430d473d50a71ddfc53a39249f not found: ID does not exist"
Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.097473 4684 scope.go:117] "RemoveContainer" containerID="c254c78a46bfb0ffcf97995f1c7f556b21394ce9e1d5065667d03b08fcade55b"
Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.102437 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b89446d-5402-4130-b07f-cdd46b6e3d5d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b89446d-5402-4130-b07f-cdd46b6e3d5d" (UID: "2b89446d-5402-4130-b07f-cdd46b6e3d5d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.109179 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e08a2ff1-2079-4571-9c86-3167ce7e20d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e08a2ff1-2079-4571-9c86-3167ce7e20d6" (UID: "e08a2ff1-2079-4571-9c86-3167ce7e20d6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.114099 4684 scope.go:117] "RemoveContainer" containerID="e4ec020c808ef9cc71ca8f6386e7b45d9a3088020296fad6f49c6e1459ad09a7" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.136770 4684 scope.go:117] "RemoveContainer" containerID="aafbdf570b6238dc16f42510a392e07981fd9edcff85eb6d66938cb2cbab4071" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.139309 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0b6cdf0-27ef-4701-b0d2-4b877b043253-catalog-content\") pod \"b0b6cdf0-27ef-4701-b0d2-4b877b043253\" (UID: \"b0b6cdf0-27ef-4701-b0d2-4b877b043253\") " Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.139368 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5scz\" (UniqueName: \"kubernetes.io/projected/b0b6cdf0-27ef-4701-b0d2-4b877b043253-kube-api-access-t5scz\") pod \"b0b6cdf0-27ef-4701-b0d2-4b877b043253\" (UID: \"b0b6cdf0-27ef-4701-b0d2-4b877b043253\") " Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.139426 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0b6cdf0-27ef-4701-b0d2-4b877b043253-utilities\") pod \"b0b6cdf0-27ef-4701-b0d2-4b877b043253\" (UID: \"b0b6cdf0-27ef-4701-b0d2-4b877b043253\") " Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.139844 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e08a2ff1-2079-4571-9c86-3167ce7e20d6-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.139864 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b89446d-5402-4130-b07f-cdd46b6e3d5d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.139877 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lbtv\" (UniqueName: \"kubernetes.io/projected/e08a2ff1-2079-4571-9c86-3167ce7e20d6-kube-api-access-4lbtv\") on node \"crc\" DevicePath \"\"" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.139890 4684 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/603ee3d6-73a1-4796-a857-84f9c889b3af-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.139900 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b89446d-5402-4130-b07f-cdd46b6e3d5d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.139910 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e08a2ff1-2079-4571-9c86-3167ce7e20d6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.139919 4684 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/603ee3d6-73a1-4796-a857-84f9c889b3af-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.139929 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc2zj\" 
(UniqueName: \"kubernetes.io/projected/603ee3d6-73a1-4796-a857-84f9c889b3af-kube-api-access-bc2zj\") on node \"crc\" DevicePath \"\"" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.139939 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzr4j\" (UniqueName: \"kubernetes.io/projected/2b89446d-5402-4130-b07f-cdd46b6e3d5d-kube-api-access-zzr4j\") on node \"crc\" DevicePath \"\"" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.141555 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0b6cdf0-27ef-4701-b0d2-4b877b043253-utilities" (OuterVolumeSpecName: "utilities") pod "b0b6cdf0-27ef-4701-b0d2-4b877b043253" (UID: "b0b6cdf0-27ef-4701-b0d2-4b877b043253"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.144056 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0b6cdf0-27ef-4701-b0d2-4b877b043253-kube-api-access-t5scz" (OuterVolumeSpecName: "kube-api-access-t5scz") pod "b0b6cdf0-27ef-4701-b0d2-4b877b043253" (UID: "b0b6cdf0-27ef-4701-b0d2-4b877b043253"). InnerVolumeSpecName "kube-api-access-t5scz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.164496 4684 scope.go:117] "RemoveContainer" containerID="c254c78a46bfb0ffcf97995f1c7f556b21394ce9e1d5065667d03b08fcade55b" Jan 21 10:12:37 crc kubenswrapper[4684]: E0121 10:12:37.165659 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c254c78a46bfb0ffcf97995f1c7f556b21394ce9e1d5065667d03b08fcade55b\": container with ID starting with c254c78a46bfb0ffcf97995f1c7f556b21394ce9e1d5065667d03b08fcade55b not found: ID does not exist" containerID="c254c78a46bfb0ffcf97995f1c7f556b21394ce9e1d5065667d03b08fcade55b" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.165716 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c254c78a46bfb0ffcf97995f1c7f556b21394ce9e1d5065667d03b08fcade55b"} err="failed to get container status \"c254c78a46bfb0ffcf97995f1c7f556b21394ce9e1d5065667d03b08fcade55b\": rpc error: code = NotFound desc = could not find container \"c254c78a46bfb0ffcf97995f1c7f556b21394ce9e1d5065667d03b08fcade55b\": container with ID starting with c254c78a46bfb0ffcf97995f1c7f556b21394ce9e1d5065667d03b08fcade55b not found: ID does not exist" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.165755 4684 scope.go:117] "RemoveContainer" containerID="e4ec020c808ef9cc71ca8f6386e7b45d9a3088020296fad6f49c6e1459ad09a7" Jan 21 10:12:37 crc kubenswrapper[4684]: E0121 10:12:37.166352 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4ec020c808ef9cc71ca8f6386e7b45d9a3088020296fad6f49c6e1459ad09a7\": container with ID starting with e4ec020c808ef9cc71ca8f6386e7b45d9a3088020296fad6f49c6e1459ad09a7 not found: ID does not exist" containerID="e4ec020c808ef9cc71ca8f6386e7b45d9a3088020296fad6f49c6e1459ad09a7" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.166409 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4ec020c808ef9cc71ca8f6386e7b45d9a3088020296fad6f49c6e1459ad09a7"} err="failed to get container status \"e4ec020c808ef9cc71ca8f6386e7b45d9a3088020296fad6f49c6e1459ad09a7\": rpc error: code = NotFound 
Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.166433 4684 scope.go:117] "RemoveContainer" containerID="aafbdf570b6238dc16f42510a392e07981fd9edcff85eb6d66938cb2cbab4071"
Jan 21 10:12:37 crc kubenswrapper[4684]: E0121 10:12:37.166809 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aafbdf570b6238dc16f42510a392e07981fd9edcff85eb6d66938cb2cbab4071\": container with ID starting with aafbdf570b6238dc16f42510a392e07981fd9edcff85eb6d66938cb2cbab4071 not found: ID does not exist" containerID="aafbdf570b6238dc16f42510a392e07981fd9edcff85eb6d66938cb2cbab4071"
Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.166838 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aafbdf570b6238dc16f42510a392e07981fd9edcff85eb6d66938cb2cbab4071"} err="failed to get container status \"aafbdf570b6238dc16f42510a392e07981fd9edcff85eb6d66938cb2cbab4071\": rpc error: code = NotFound desc = could not find container \"aafbdf570b6238dc16f42510a392e07981fd9edcff85eb6d66938cb2cbab4071\": container with ID starting with aafbdf570b6238dc16f42510a392e07981fd9edcff85eb6d66938cb2cbab4071 not found: ID does not exist"
Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.166856 4684 scope.go:117] "RemoveContainer" containerID="5eb73ba6565e83c8259750262c8b72b47a644bef34badcc22b143ccf6871dc31"
Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.186332 4684 scope.go:117] "RemoveContainer" containerID="9ae1edff280e8f0dcafc0d20d8720c07c9eed6c786f24c0a30db79b3f8d17c9e"
Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.209562 4684 scope.go:117] "RemoveContainer" containerID="77e5a96fbd56fd425a9b9963633c351f73d83f9dea9f1c68a518798b0dc4f8d6"
Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.226331 4684 scope.go:117] "RemoveContainer" containerID="5eb73ba6565e83c8259750262c8b72b47a644bef34badcc22b143ccf6871dc31"
Jan 21 10:12:37 crc kubenswrapper[4684]: E0121 10:12:37.226843 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5eb73ba6565e83c8259750262c8b72b47a644bef34badcc22b143ccf6871dc31\": container with ID starting with 5eb73ba6565e83c8259750262c8b72b47a644bef34badcc22b143ccf6871dc31 not found: ID does not exist" containerID="5eb73ba6565e83c8259750262c8b72b47a644bef34badcc22b143ccf6871dc31"
Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.226885 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eb73ba6565e83c8259750262c8b72b47a644bef34badcc22b143ccf6871dc31"} err="failed to get container status \"5eb73ba6565e83c8259750262c8b72b47a644bef34badcc22b143ccf6871dc31\": rpc error: code = NotFound desc = could not find container \"5eb73ba6565e83c8259750262c8b72b47a644bef34badcc22b143ccf6871dc31\": container with ID starting with 5eb73ba6565e83c8259750262c8b72b47a644bef34badcc22b143ccf6871dc31 not found: ID does not exist"
Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.226911 4684 scope.go:117] "RemoveContainer" containerID="9ae1edff280e8f0dcafc0d20d8720c07c9eed6c786f24c0a30db79b3f8d17c9e"
Jan 21 10:12:37 crc kubenswrapper[4684]: E0121 10:12:37.227484 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ae1edff280e8f0dcafc0d20d8720c07c9eed6c786f24c0a30db79b3f8d17c9e\": container with ID starting with 9ae1edff280e8f0dcafc0d20d8720c07c9eed6c786f24c0a30db79b3f8d17c9e not found: ID does not exist" containerID="9ae1edff280e8f0dcafc0d20d8720c07c9eed6c786f24c0a30db79b3f8d17c9e"
Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.227542 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ae1edff280e8f0dcafc0d20d8720c07c9eed6c786f24c0a30db79b3f8d17c9e"} err="failed to get container status \"9ae1edff280e8f0dcafc0d20d8720c07c9eed6c786f24c0a30db79b3f8d17c9e\": rpc error: code = NotFound desc = could not find container \"9ae1edff280e8f0dcafc0d20d8720c07c9eed6c786f24c0a30db79b3f8d17c9e\": container with ID starting with 9ae1edff280e8f0dcafc0d20d8720c07c9eed6c786f24c0a30db79b3f8d17c9e not found: ID does not exist"
Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.227583 4684 scope.go:117] "RemoveContainer" containerID="77e5a96fbd56fd425a9b9963633c351f73d83f9dea9f1c68a518798b0dc4f8d6"
Jan 21 10:12:37 crc kubenswrapper[4684]: E0121 10:12:37.228041 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77e5a96fbd56fd425a9b9963633c351f73d83f9dea9f1c68a518798b0dc4f8d6\": container with ID starting with 77e5a96fbd56fd425a9b9963633c351f73d83f9dea9f1c68a518798b0dc4f8d6 not found: ID does not exist" containerID="77e5a96fbd56fd425a9b9963633c351f73d83f9dea9f1c68a518798b0dc4f8d6"
Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.228084 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77e5a96fbd56fd425a9b9963633c351f73d83f9dea9f1c68a518798b0dc4f8d6"} err="failed to get container status \"77e5a96fbd56fd425a9b9963633c351f73d83f9dea9f1c68a518798b0dc4f8d6\": rpc error: code = NotFound desc = could not find container \"77e5a96fbd56fd425a9b9963633c351f73d83f9dea9f1c68a518798b0dc4f8d6\": container with ID starting with 77e5a96fbd56fd425a9b9963633c351f73d83f9dea9f1c68a518798b0dc4f8d6 not found: ID does not exist"
Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.243440 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5scz\" (UniqueName: \"kubernetes.io/projected/b0b6cdf0-27ef-4701-b0d2-4b877b043253-kube-api-access-t5scz\") on node \"crc\" DevicePath \"\""
Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.243491 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0b6cdf0-27ef-4701-b0d2-4b877b043253-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.251109 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fpqww"]
Jan 21 10:12:37 crc kubenswrapper[4684]: W0121 10:12:37.257804 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bc5222c_6074_4cbe_b442_7b41ed0ed363.slice/crio-4d2ea55204352e4cf0a73a0ee55972a685c1806772d1e30f791a4d10c707a14c WatchSource:0}: Error finding container 4d2ea55204352e4cf0a73a0ee55972a685c1806772d1e30f791a4d10c707a14c: Status 404 returned error can't find the container with id 4d2ea55204352e4cf0a73a0ee55972a685c1806772d1e30f791a4d10c707a14c
Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.272977 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cnf2q"]
pods=["openshift-marketplace/certified-operators-cnf2q"] Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.282901 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cnf2q"] Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.294415 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6wfjd"] Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.301045 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6wfjd"] Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.302152 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.302203 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.311070 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0b6cdf0-27ef-4701-b0d2-4b877b043253-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b0b6cdf0-27ef-4701-b0d2-4b877b043253" (UID: "b0b6cdf0-27ef-4701-b0d2-4b877b043253"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.330828 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2q67n"] Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.335829 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2q67n"] Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.345361 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0b6cdf0-27ef-4701-b0d2-4b877b043253-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.995071 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fpqww" event={"ID":"8bc5222c-6074-4cbe-b442-7b41ed0ed363","Type":"ContainerStarted","Data":"3a6179cfc0fda6fcd80ff5059856e61c7011bd3b210ee90954b9386cc1baa641"} Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.995573 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fpqww" event={"ID":"8bc5222c-6074-4cbe-b442-7b41ed0ed363","Type":"ContainerStarted","Data":"4d2ea55204352e4cf0a73a0ee55972a685c1806772d1e30f791a4d10c707a14c"} Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.995606 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-fpqww" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.999526 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-fpqww" Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.999768 4684 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqrkg" event={"ID":"b0b6cdf0-27ef-4701-b0d2-4b877b043253","Type":"ContainerDied","Data":"cd6736fd4d48f00b17cf7a1313c0cf1a960f11a7b68e8f9953316c3ee229c61b"} Jan 21 10:12:37 crc kubenswrapper[4684]: I0121 10:12:37.999811 4684 scope.go:117] "RemoveContainer" containerID="d74931c1ff2881336cf1b55fc427105f721679cf38dd2768e7066ceb79091edf" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:37.999938 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qqrkg" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.018168 4684 scope.go:117] "RemoveContainer" containerID="eb24a311a16f2c9331a27378219053062995cbc1e001b408a486c28e1c03ed80" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.025561 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-fpqww" podStartSLOduration=2.025533614 podStartE2EDuration="2.025533614s" podCreationTimestamp="2026-01-21 10:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:12:38.023384521 +0000 UTC m=+395.781467498" watchObservedRunningTime="2026-01-21 10:12:38.025533614 +0000 UTC m=+395.783616581" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.065466 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qqrkg"] Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.069858 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qqrkg"] Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.071975 4684 scope.go:117] "RemoveContainer" containerID="3e9690697645ed9cabdf2034ae868b6ea06d52540ff4238b0fe64381e1496285" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.464078 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q79ws"] Jan 21 10:12:38 crc kubenswrapper[4684]: E0121 10:12:38.466311 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b89446d-5402-4130-b07f-cdd46b6e3d5d" containerName="extract-content" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.466511 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b89446d-5402-4130-b07f-cdd46b6e3d5d" containerName="extract-content" Jan 21 10:12:38 crc kubenswrapper[4684]: E0121 10:12:38.466632 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b89446d-5402-4130-b07f-cdd46b6e3d5d" containerName="extract-utilities" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.466734 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b89446d-5402-4130-b07f-cdd46b6e3d5d" containerName="extract-utilities" Jan 21 10:12:38 crc kubenswrapper[4684]: E0121 10:12:38.466836 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e08a2ff1-2079-4571-9c86-3167ce7e20d6" containerName="extract-content" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.466941 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="e08a2ff1-2079-4571-9c86-3167ce7e20d6" containerName="extract-content" Jan 21 10:12:38 crc kubenswrapper[4684]: E0121 10:12:38.467045 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e08a2ff1-2079-4571-9c86-3167ce7e20d6" containerName="extract-utilities" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.467199 4684 
state_mem.go:107] "Deleted CPUSet assignment" podUID="e08a2ff1-2079-4571-9c86-3167ce7e20d6" containerName="extract-utilities" Jan 21 10:12:38 crc kubenswrapper[4684]: E0121 10:12:38.467334 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e22e72a-79a2-45ea-8093-17f56c2f1748" containerName="extract-content" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.467485 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e22e72a-79a2-45ea-8093-17f56c2f1748" containerName="extract-content" Jan 21 10:12:38 crc kubenswrapper[4684]: E0121 10:12:38.467592 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="603ee3d6-73a1-4796-a857-84f9c889b3af" containerName="marketplace-operator" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.467674 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="603ee3d6-73a1-4796-a857-84f9c889b3af" containerName="marketplace-operator" Jan 21 10:12:38 crc kubenswrapper[4684]: E0121 10:12:38.467791 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e22e72a-79a2-45ea-8093-17f56c2f1748" containerName="extract-utilities" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.467904 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e22e72a-79a2-45ea-8093-17f56c2f1748" containerName="extract-utilities" Jan 21 10:12:38 crc kubenswrapper[4684]: E0121 10:12:38.468011 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e22e72a-79a2-45ea-8093-17f56c2f1748" containerName="registry-server" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.468115 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e22e72a-79a2-45ea-8093-17f56c2f1748" containerName="registry-server" Jan 21 10:12:38 crc kubenswrapper[4684]: E0121 10:12:38.468232 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e08a2ff1-2079-4571-9c86-3167ce7e20d6" containerName="registry-server" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.468337 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="e08a2ff1-2079-4571-9c86-3167ce7e20d6" containerName="registry-server" Jan 21 10:12:38 crc kubenswrapper[4684]: E0121 10:12:38.468486 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b89446d-5402-4130-b07f-cdd46b6e3d5d" containerName="registry-server" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.468591 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b89446d-5402-4130-b07f-cdd46b6e3d5d" containerName="registry-server" Jan 21 10:12:38 crc kubenswrapper[4684]: E0121 10:12:38.468724 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0b6cdf0-27ef-4701-b0d2-4b877b043253" containerName="extract-utilities" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.468828 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0b6cdf0-27ef-4701-b0d2-4b877b043253" containerName="extract-utilities" Jan 21 10:12:38 crc kubenswrapper[4684]: E0121 10:12:38.468944 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0b6cdf0-27ef-4701-b0d2-4b877b043253" containerName="extract-content" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.469052 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0b6cdf0-27ef-4701-b0d2-4b877b043253" containerName="extract-content" Jan 21 10:12:38 crc kubenswrapper[4684]: E0121 10:12:38.469166 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0b6cdf0-27ef-4701-b0d2-4b877b043253" containerName="registry-server" Jan 21 10:12:38 crc 
Jan 21 10:12:38 crc kubenswrapper[4684]: E0121 10:12:38.469432 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="603ee3d6-73a1-4796-a857-84f9c889b3af" containerName="marketplace-operator"
Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.469528 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="603ee3d6-73a1-4796-a857-84f9c889b3af" containerName="marketplace-operator"
Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.469880 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="603ee3d6-73a1-4796-a857-84f9c889b3af" containerName="marketplace-operator"
Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.470037 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b89446d-5402-4130-b07f-cdd46b6e3d5d" containerName="registry-server"
Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.470145 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="e08a2ff1-2079-4571-9c86-3167ce7e20d6" containerName="registry-server"
Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.470237 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="603ee3d6-73a1-4796-a857-84f9c889b3af" containerName="marketplace-operator"
Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.470322 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0b6cdf0-27ef-4701-b0d2-4b877b043253" containerName="registry-server"
Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.470550 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e22e72a-79a2-45ea-8093-17f56c2f1748" containerName="registry-server"
Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.472083 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q79ws"
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q79ws" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.477498 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.497682 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q79ws"] Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.523610 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e22e72a-79a2-45ea-8093-17f56c2f1748" path="/var/lib/kubelet/pods/0e22e72a-79a2-45ea-8093-17f56c2f1748/volumes" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.524398 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b89446d-5402-4130-b07f-cdd46b6e3d5d" path="/var/lib/kubelet/pods/2b89446d-5402-4130-b07f-cdd46b6e3d5d/volumes" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.525038 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="603ee3d6-73a1-4796-a857-84f9c889b3af" path="/var/lib/kubelet/pods/603ee3d6-73a1-4796-a857-84f9c889b3af/volumes" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.526107 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0b6cdf0-27ef-4701-b0d2-4b877b043253" path="/var/lib/kubelet/pods/b0b6cdf0-27ef-4701-b0d2-4b877b043253/volumes" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.526976 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e08a2ff1-2079-4571-9c86-3167ce7e20d6" path="/var/lib/kubelet/pods/e08a2ff1-2079-4571-9c86-3167ce7e20d6/volumes" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.567182 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9f898be-f7bb-48e5-b345-cc483a249a54-utilities\") pod \"redhat-marketplace-q79ws\" (UID: \"f9f898be-f7bb-48e5-b345-cc483a249a54\") " pod="openshift-marketplace/redhat-marketplace-q79ws" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.567347 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9f898be-f7bb-48e5-b345-cc483a249a54-catalog-content\") pod \"redhat-marketplace-q79ws\" (UID: \"f9f898be-f7bb-48e5-b345-cc483a249a54\") " pod="openshift-marketplace/redhat-marketplace-q79ws" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.567670 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjg9h\" (UniqueName: \"kubernetes.io/projected/f9f898be-f7bb-48e5-b345-cc483a249a54-kube-api-access-wjg9h\") pod \"redhat-marketplace-q79ws\" (UID: \"f9f898be-f7bb-48e5-b345-cc483a249a54\") " pod="openshift-marketplace/redhat-marketplace-q79ws" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.659510 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mk9s5"] Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.660941 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mk9s5" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.670537 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9f898be-f7bb-48e5-b345-cc483a249a54-catalog-content\") pod \"redhat-marketplace-q79ws\" (UID: \"f9f898be-f7bb-48e5-b345-cc483a249a54\") " pod="openshift-marketplace/redhat-marketplace-q79ws" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.670623 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjg9h\" (UniqueName: \"kubernetes.io/projected/f9f898be-f7bb-48e5-b345-cc483a249a54-kube-api-access-wjg9h\") pod \"redhat-marketplace-q79ws\" (UID: \"f9f898be-f7bb-48e5-b345-cc483a249a54\") " pod="openshift-marketplace/redhat-marketplace-q79ws" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.670660 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9f898be-f7bb-48e5-b345-cc483a249a54-utilities\") pod \"redhat-marketplace-q79ws\" (UID: \"f9f898be-f7bb-48e5-b345-cc483a249a54\") " pod="openshift-marketplace/redhat-marketplace-q79ws" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.671118 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9f898be-f7bb-48e5-b345-cc483a249a54-utilities\") pod \"redhat-marketplace-q79ws\" (UID: \"f9f898be-f7bb-48e5-b345-cc483a249a54\") " pod="openshift-marketplace/redhat-marketplace-q79ws" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.671495 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9f898be-f7bb-48e5-b345-cc483a249a54-catalog-content\") pod \"redhat-marketplace-q79ws\" (UID: \"f9f898be-f7bb-48e5-b345-cc483a249a54\") " pod="openshift-marketplace/redhat-marketplace-q79ws" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.674087 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mk9s5"] Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.683756 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.708132 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjg9h\" (UniqueName: \"kubernetes.io/projected/f9f898be-f7bb-48e5-b345-cc483a249a54-kube-api-access-wjg9h\") pod \"redhat-marketplace-q79ws\" (UID: \"f9f898be-f7bb-48e5-b345-cc483a249a54\") " pod="openshift-marketplace/redhat-marketplace-q79ws" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.771923 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3bac8f1-34d3-4f17-853f-a8ccd424baef-catalog-content\") pod \"redhat-operators-mk9s5\" (UID: \"e3bac8f1-34d3-4f17-853f-a8ccd424baef\") " pod="openshift-marketplace/redhat-operators-mk9s5" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.771968 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3bac8f1-34d3-4f17-853f-a8ccd424baef-utilities\") pod \"redhat-operators-mk9s5\" (UID: \"e3bac8f1-34d3-4f17-853f-a8ccd424baef\") " 
pod="openshift-marketplace/redhat-operators-mk9s5" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.772136 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqj57\" (UniqueName: \"kubernetes.io/projected/e3bac8f1-34d3-4f17-853f-a8ccd424baef-kube-api-access-xqj57\") pod \"redhat-operators-mk9s5\" (UID: \"e3bac8f1-34d3-4f17-853f-a8ccd424baef\") " pod="openshift-marketplace/redhat-operators-mk9s5" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.789994 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q79ws" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.873020 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqj57\" (UniqueName: \"kubernetes.io/projected/e3bac8f1-34d3-4f17-853f-a8ccd424baef-kube-api-access-xqj57\") pod \"redhat-operators-mk9s5\" (UID: \"e3bac8f1-34d3-4f17-853f-a8ccd424baef\") " pod="openshift-marketplace/redhat-operators-mk9s5" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.873096 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3bac8f1-34d3-4f17-853f-a8ccd424baef-catalog-content\") pod \"redhat-operators-mk9s5\" (UID: \"e3bac8f1-34d3-4f17-853f-a8ccd424baef\") " pod="openshift-marketplace/redhat-operators-mk9s5" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.873121 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3bac8f1-34d3-4f17-853f-a8ccd424baef-utilities\") pod \"redhat-operators-mk9s5\" (UID: \"e3bac8f1-34d3-4f17-853f-a8ccd424baef\") " pod="openshift-marketplace/redhat-operators-mk9s5" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.873847 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3bac8f1-34d3-4f17-853f-a8ccd424baef-utilities\") pod \"redhat-operators-mk9s5\" (UID: \"e3bac8f1-34d3-4f17-853f-a8ccd424baef\") " pod="openshift-marketplace/redhat-operators-mk9s5" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.873896 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3bac8f1-34d3-4f17-853f-a8ccd424baef-catalog-content\") pod \"redhat-operators-mk9s5\" (UID: \"e3bac8f1-34d3-4f17-853f-a8ccd424baef\") " pod="openshift-marketplace/redhat-operators-mk9s5" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.903209 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqj57\" (UniqueName: \"kubernetes.io/projected/e3bac8f1-34d3-4f17-853f-a8ccd424baef-kube-api-access-xqj57\") pod \"redhat-operators-mk9s5\" (UID: \"e3bac8f1-34d3-4f17-853f-a8ccd424baef\") " pod="openshift-marketplace/redhat-operators-mk9s5" Jan 21 10:12:38 crc kubenswrapper[4684]: I0121 10:12:38.989038 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mk9s5" Jan 21 10:12:39 crc kubenswrapper[4684]: I0121 10:12:39.238105 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q79ws"] Jan 21 10:12:39 crc kubenswrapper[4684]: W0121 10:12:39.244627 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9f898be_f7bb_48e5_b345_cc483a249a54.slice/crio-4c2093aa3dabb1d4bf511a37a59f6a02970d49c934e706982958e3700e4dbfae WatchSource:0}: Error finding container 4c2093aa3dabb1d4bf511a37a59f6a02970d49c934e706982958e3700e4dbfae: Status 404 returned error can't find the container with id 4c2093aa3dabb1d4bf511a37a59f6a02970d49c934e706982958e3700e4dbfae Jan 21 10:12:39 crc kubenswrapper[4684]: I0121 10:12:39.422836 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mk9s5"] Jan 21 10:12:39 crc kubenswrapper[4684]: W0121 10:12:39.429866 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3bac8f1_34d3_4f17_853f_a8ccd424baef.slice/crio-ef437189f07d0b4390693039de2dc8673f37dabc340c49dfccd042e174609be3 WatchSource:0}: Error finding container ef437189f07d0b4390693039de2dc8673f37dabc340c49dfccd042e174609be3: Status 404 returned error can't find the container with id ef437189f07d0b4390693039de2dc8673f37dabc340c49dfccd042e174609be3 Jan 21 10:12:40 crc kubenswrapper[4684]: I0121 10:12:40.028012 4684 generic.go:334] "Generic (PLEG): container finished" podID="e3bac8f1-34d3-4f17-853f-a8ccd424baef" containerID="3135f2c0216b070e6bef48d4e0f72339286ef805f1a440d50919cb3e988196ad" exitCode=0 Jan 21 10:12:40 crc kubenswrapper[4684]: I0121 10:12:40.028103 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mk9s5" event={"ID":"e3bac8f1-34d3-4f17-853f-a8ccd424baef","Type":"ContainerDied","Data":"3135f2c0216b070e6bef48d4e0f72339286ef805f1a440d50919cb3e988196ad"} Jan 21 10:12:40 crc kubenswrapper[4684]: I0121 10:12:40.028209 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mk9s5" event={"ID":"e3bac8f1-34d3-4f17-853f-a8ccd424baef","Type":"ContainerStarted","Data":"ef437189f07d0b4390693039de2dc8673f37dabc340c49dfccd042e174609be3"} Jan 21 10:12:40 crc kubenswrapper[4684]: I0121 10:12:40.031322 4684 generic.go:334] "Generic (PLEG): container finished" podID="f9f898be-f7bb-48e5-b345-cc483a249a54" containerID="c5faf20d6c7d1d77e0b1bb3ee6a2c7b18f80766e19aaf7aeca8b19f6171335e3" exitCode=0 Jan 21 10:12:40 crc kubenswrapper[4684]: I0121 10:12:40.031441 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q79ws" event={"ID":"f9f898be-f7bb-48e5-b345-cc483a249a54","Type":"ContainerDied","Data":"c5faf20d6c7d1d77e0b1bb3ee6a2c7b18f80766e19aaf7aeca8b19f6171335e3"} Jan 21 10:12:40 crc kubenswrapper[4684]: I0121 10:12:40.031493 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q79ws" event={"ID":"f9f898be-f7bb-48e5-b345-cc483a249a54","Type":"ContainerStarted","Data":"4c2093aa3dabb1d4bf511a37a59f6a02970d49c934e706982958e3700e4dbfae"} Jan 21 10:12:40 crc kubenswrapper[4684]: I0121 10:12:40.868463 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dwnx5"] Jan 21 10:12:40 crc kubenswrapper[4684]: I0121 10:12:40.871903 4684 util.go:30] "No sandbox 
Jan 21 10:12:40 crc kubenswrapper[4684]: I0121 10:12:40.878811 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dwnx5"]
Jan 21 10:12:40 crc kubenswrapper[4684]: I0121 10:12:40.886676 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 21 10:12:41 crc kubenswrapper[4684]: I0121 10:12:41.005448 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db0f8c13-b2c9-405d-8a57-4a68439bb296-catalog-content\") pod \"community-operators-dwnx5\" (UID: \"db0f8c13-b2c9-405d-8a57-4a68439bb296\") " pod="openshift-marketplace/community-operators-dwnx5"
Jan 21 10:12:41 crc kubenswrapper[4684]: I0121 10:12:41.005676 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db0f8c13-b2c9-405d-8a57-4a68439bb296-utilities\") pod \"community-operators-dwnx5\" (UID: \"db0f8c13-b2c9-405d-8a57-4a68439bb296\") " pod="openshift-marketplace/community-operators-dwnx5"
Jan 21 10:12:41 crc kubenswrapper[4684]: I0121 10:12:41.005819 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52vrt\" (UniqueName: \"kubernetes.io/projected/db0f8c13-b2c9-405d-8a57-4a68439bb296-kube-api-access-52vrt\") pod \"community-operators-dwnx5\" (UID: \"db0f8c13-b2c9-405d-8a57-4a68439bb296\") " pod="openshift-marketplace/community-operators-dwnx5"
Jan 21 10:12:41 crc kubenswrapper[4684]: I0121 10:12:41.041464 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mk9s5" event={"ID":"e3bac8f1-34d3-4f17-853f-a8ccd424baef","Type":"ContainerStarted","Data":"bcfb58a5aab09ce635640b40cd7526f3c08ca3f1c849efd89fce6eb1e8647281"}
Jan 21 10:12:41 crc kubenswrapper[4684]: I0121 10:12:41.044662 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q79ws" event={"ID":"f9f898be-f7bb-48e5-b345-cc483a249a54","Type":"ContainerStarted","Data":"084c16a6936109ded891e473590bc57e30de0ae7e687e5da4093be4b46a5ee61"}
Jan 21 10:12:41 crc kubenswrapper[4684]: I0121 10:12:41.062726 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-chnqm"]
Jan 21 10:12:41 crc kubenswrapper[4684]: I0121 10:12:41.064369 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-chnqm"
Need to start a new one" pod="openshift-marketplace/certified-operators-chnqm" Jan 21 10:12:41 crc kubenswrapper[4684]: I0121 10:12:41.068717 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 10:12:41 crc kubenswrapper[4684]: I0121 10:12:41.072096 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-chnqm"] Jan 21 10:12:41 crc kubenswrapper[4684]: I0121 10:12:41.107200 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52vrt\" (UniqueName: \"kubernetes.io/projected/db0f8c13-b2c9-405d-8a57-4a68439bb296-kube-api-access-52vrt\") pod \"community-operators-dwnx5\" (UID: \"db0f8c13-b2c9-405d-8a57-4a68439bb296\") " pod="openshift-marketplace/community-operators-dwnx5" Jan 21 10:12:41 crc kubenswrapper[4684]: I0121 10:12:41.107295 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db0f8c13-b2c9-405d-8a57-4a68439bb296-catalog-content\") pod \"community-operators-dwnx5\" (UID: \"db0f8c13-b2c9-405d-8a57-4a68439bb296\") " pod="openshift-marketplace/community-operators-dwnx5" Jan 21 10:12:41 crc kubenswrapper[4684]: I0121 10:12:41.107344 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db0f8c13-b2c9-405d-8a57-4a68439bb296-utilities\") pod \"community-operators-dwnx5\" (UID: \"db0f8c13-b2c9-405d-8a57-4a68439bb296\") " pod="openshift-marketplace/community-operators-dwnx5" Jan 21 10:12:41 crc kubenswrapper[4684]: I0121 10:12:41.108008 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db0f8c13-b2c9-405d-8a57-4a68439bb296-utilities\") pod \"community-operators-dwnx5\" (UID: \"db0f8c13-b2c9-405d-8a57-4a68439bb296\") " pod="openshift-marketplace/community-operators-dwnx5" Jan 21 10:12:41 crc kubenswrapper[4684]: I0121 10:12:41.108124 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db0f8c13-b2c9-405d-8a57-4a68439bb296-catalog-content\") pod \"community-operators-dwnx5\" (UID: \"db0f8c13-b2c9-405d-8a57-4a68439bb296\") " pod="openshift-marketplace/community-operators-dwnx5" Jan 21 10:12:41 crc kubenswrapper[4684]: I0121 10:12:41.131605 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52vrt\" (UniqueName: \"kubernetes.io/projected/db0f8c13-b2c9-405d-8a57-4a68439bb296-kube-api-access-52vrt\") pod \"community-operators-dwnx5\" (UID: \"db0f8c13-b2c9-405d-8a57-4a68439bb296\") " pod="openshift-marketplace/community-operators-dwnx5" Jan 21 10:12:41 crc kubenswrapper[4684]: I0121 10:12:41.209218 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbnnv\" (UniqueName: \"kubernetes.io/projected/7a860bc7-f569-4865-bba1-65aded1a7dac-kube-api-access-bbnnv\") pod \"certified-operators-chnqm\" (UID: \"7a860bc7-f569-4865-bba1-65aded1a7dac\") " pod="openshift-marketplace/certified-operators-chnqm" Jan 21 10:12:41 crc kubenswrapper[4684]: I0121 10:12:41.210627 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a860bc7-f569-4865-bba1-65aded1a7dac-catalog-content\") pod \"certified-operators-chnqm\" (UID: 
\"7a860bc7-f569-4865-bba1-65aded1a7dac\") " pod="openshift-marketplace/certified-operators-chnqm" Jan 21 10:12:41 crc kubenswrapper[4684]: I0121 10:12:41.210724 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a860bc7-f569-4865-bba1-65aded1a7dac-utilities\") pod \"certified-operators-chnqm\" (UID: \"7a860bc7-f569-4865-bba1-65aded1a7dac\") " pod="openshift-marketplace/certified-operators-chnqm" Jan 21 10:12:41 crc kubenswrapper[4684]: I0121 10:12:41.249757 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dwnx5" Jan 21 10:12:41 crc kubenswrapper[4684]: I0121 10:12:41.313699 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a860bc7-f569-4865-bba1-65aded1a7dac-catalog-content\") pod \"certified-operators-chnqm\" (UID: \"7a860bc7-f569-4865-bba1-65aded1a7dac\") " pod="openshift-marketplace/certified-operators-chnqm" Jan 21 10:12:41 crc kubenswrapper[4684]: I0121 10:12:41.313782 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a860bc7-f569-4865-bba1-65aded1a7dac-utilities\") pod \"certified-operators-chnqm\" (UID: \"7a860bc7-f569-4865-bba1-65aded1a7dac\") " pod="openshift-marketplace/certified-operators-chnqm" Jan 21 10:12:41 crc kubenswrapper[4684]: I0121 10:12:41.313884 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbnnv\" (UniqueName: \"kubernetes.io/projected/7a860bc7-f569-4865-bba1-65aded1a7dac-kube-api-access-bbnnv\") pod \"certified-operators-chnqm\" (UID: \"7a860bc7-f569-4865-bba1-65aded1a7dac\") " pod="openshift-marketplace/certified-operators-chnqm" Jan 21 10:12:41 crc kubenswrapper[4684]: I0121 10:12:41.315021 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a860bc7-f569-4865-bba1-65aded1a7dac-catalog-content\") pod \"certified-operators-chnqm\" (UID: \"7a860bc7-f569-4865-bba1-65aded1a7dac\") " pod="openshift-marketplace/certified-operators-chnqm" Jan 21 10:12:41 crc kubenswrapper[4684]: I0121 10:12:41.315150 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a860bc7-f569-4865-bba1-65aded1a7dac-utilities\") pod \"certified-operators-chnqm\" (UID: \"7a860bc7-f569-4865-bba1-65aded1a7dac\") " pod="openshift-marketplace/certified-operators-chnqm" Jan 21 10:12:41 crc kubenswrapper[4684]: I0121 10:12:41.332762 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbnnv\" (UniqueName: \"kubernetes.io/projected/7a860bc7-f569-4865-bba1-65aded1a7dac-kube-api-access-bbnnv\") pod \"certified-operators-chnqm\" (UID: \"7a860bc7-f569-4865-bba1-65aded1a7dac\") " pod="openshift-marketplace/certified-operators-chnqm" Jan 21 10:12:41 crc kubenswrapper[4684]: I0121 10:12:41.384389 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-chnqm" Jan 21 10:12:41 crc kubenswrapper[4684]: I0121 10:12:41.681942 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dwnx5"] Jan 21 10:12:41 crc kubenswrapper[4684]: W0121 10:12:41.693766 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb0f8c13_b2c9_405d_8a57_4a68439bb296.slice/crio-f645a5d7de5eeff787cbae32e6a97a2e4a562a7803f99f076deec5c8979508ce WatchSource:0}: Error finding container f645a5d7de5eeff787cbae32e6a97a2e4a562a7803f99f076deec5c8979508ce: Status 404 returned error can't find the container with id f645a5d7de5eeff787cbae32e6a97a2e4a562a7803f99f076deec5c8979508ce Jan 21 10:12:41 crc kubenswrapper[4684]: I0121 10:12:41.822585 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-chnqm"] Jan 21 10:12:42 crc kubenswrapper[4684]: I0121 10:12:42.054889 4684 generic.go:334] "Generic (PLEG): container finished" podID="f9f898be-f7bb-48e5-b345-cc483a249a54" containerID="084c16a6936109ded891e473590bc57e30de0ae7e687e5da4093be4b46a5ee61" exitCode=0 Jan 21 10:12:42 crc kubenswrapper[4684]: I0121 10:12:42.055250 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q79ws" event={"ID":"f9f898be-f7bb-48e5-b345-cc483a249a54","Type":"ContainerDied","Data":"084c16a6936109ded891e473590bc57e30de0ae7e687e5da4093be4b46a5ee61"} Jan 21 10:12:42 crc kubenswrapper[4684]: I0121 10:12:42.058849 4684 generic.go:334] "Generic (PLEG): container finished" podID="db0f8c13-b2c9-405d-8a57-4a68439bb296" containerID="2fc03ebc2c6f74b0a22832d802335fc39413b552bc31f525a61c6a9f327c5ad6" exitCode=0 Jan 21 10:12:42 crc kubenswrapper[4684]: I0121 10:12:42.058938 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwnx5" event={"ID":"db0f8c13-b2c9-405d-8a57-4a68439bb296","Type":"ContainerDied","Data":"2fc03ebc2c6f74b0a22832d802335fc39413b552bc31f525a61c6a9f327c5ad6"} Jan 21 10:12:42 crc kubenswrapper[4684]: I0121 10:12:42.058977 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwnx5" event={"ID":"db0f8c13-b2c9-405d-8a57-4a68439bb296","Type":"ContainerStarted","Data":"f645a5d7de5eeff787cbae32e6a97a2e4a562a7803f99f076deec5c8979508ce"} Jan 21 10:12:42 crc kubenswrapper[4684]: I0121 10:12:42.062186 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chnqm" event={"ID":"7a860bc7-f569-4865-bba1-65aded1a7dac","Type":"ContainerStarted","Data":"e33ec5430b56165616aaa37e08728d13caccab62d33011af38308647eeadabe5"} Jan 21 10:12:42 crc kubenswrapper[4684]: I0121 10:12:42.063534 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chnqm" event={"ID":"7a860bc7-f569-4865-bba1-65aded1a7dac","Type":"ContainerStarted","Data":"328244c1ac357051c6b627db215a7ef3eb923995ac7b21e40b00b1127b62aa0d"} Jan 21 10:12:42 crc kubenswrapper[4684]: I0121 10:12:42.069619 4684 generic.go:334] "Generic (PLEG): container finished" podID="e3bac8f1-34d3-4f17-853f-a8ccd424baef" containerID="bcfb58a5aab09ce635640b40cd7526f3c08ca3f1c849efd89fce6eb1e8647281" exitCode=0 Jan 21 10:12:42 crc kubenswrapper[4684]: I0121 10:12:42.069692 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mk9s5" 
event={"ID":"e3bac8f1-34d3-4f17-853f-a8ccd424baef","Type":"ContainerDied","Data":"bcfb58a5aab09ce635640b40cd7526f3c08ca3f1c849efd89fce6eb1e8647281"} Jan 21 10:12:43 crc kubenswrapper[4684]: I0121 10:12:43.078167 4684 generic.go:334] "Generic (PLEG): container finished" podID="7a860bc7-f569-4865-bba1-65aded1a7dac" containerID="e33ec5430b56165616aaa37e08728d13caccab62d33011af38308647eeadabe5" exitCode=0 Jan 21 10:12:43 crc kubenswrapper[4684]: I0121 10:12:43.078298 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chnqm" event={"ID":"7a860bc7-f569-4865-bba1-65aded1a7dac","Type":"ContainerDied","Data":"e33ec5430b56165616aaa37e08728d13caccab62d33011af38308647eeadabe5"} Jan 21 10:12:43 crc kubenswrapper[4684]: I0121 10:12:43.083052 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mk9s5" event={"ID":"e3bac8f1-34d3-4f17-853f-a8ccd424baef","Type":"ContainerStarted","Data":"b7a92bd62c436dda5ae54611c0f0552627a67efb47ad665ce87649a6c3d631a4"} Jan 21 10:12:43 crc kubenswrapper[4684]: I0121 10:12:43.092709 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q79ws" event={"ID":"f9f898be-f7bb-48e5-b345-cc483a249a54","Type":"ContainerStarted","Data":"32b6ddd52713325c462f954b76472e15b753b063622b41a98ce5c48031aaa9eb"} Jan 21 10:12:43 crc kubenswrapper[4684]: I0121 10:12:43.097263 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwnx5" event={"ID":"db0f8c13-b2c9-405d-8a57-4a68439bb296","Type":"ContainerStarted","Data":"c65a73013a8c2aca1ff29164fb11e5571dc722df10db50b8c1777c587b0ac043"} Jan 21 10:12:43 crc kubenswrapper[4684]: I0121 10:12:43.155350 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q79ws" podStartSLOduration=2.688862725 podStartE2EDuration="5.155322896s" podCreationTimestamp="2026-01-21 10:12:38 +0000 UTC" firstStartedPulling="2026-01-21 10:12:40.033024102 +0000 UTC m=+397.791107069" lastFinishedPulling="2026-01-21 10:12:42.499484273 +0000 UTC m=+400.257567240" observedRunningTime="2026-01-21 10:12:43.128834571 +0000 UTC m=+400.886917538" watchObservedRunningTime="2026-01-21 10:12:43.155322896 +0000 UTC m=+400.913405863" Jan 21 10:12:43 crc kubenswrapper[4684]: I0121 10:12:43.177613 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mk9s5" podStartSLOduration=2.6850067060000002 podStartE2EDuration="5.177588901s" podCreationTimestamp="2026-01-21 10:12:38 +0000 UTC" firstStartedPulling="2026-01-21 10:12:40.030745135 +0000 UTC m=+397.788828102" lastFinishedPulling="2026-01-21 10:12:42.52332733 +0000 UTC m=+400.281410297" observedRunningTime="2026-01-21 10:12:43.175614695 +0000 UTC m=+400.933697662" watchObservedRunningTime="2026-01-21 10:12:43.177588901 +0000 UTC m=+400.935671868" Jan 21 10:12:44 crc kubenswrapper[4684]: I0121 10:12:44.109864 4684 generic.go:334] "Generic (PLEG): container finished" podID="7a860bc7-f569-4865-bba1-65aded1a7dac" containerID="7f7ad62b27777e1d2c9538e13f834ed967966025c05c5d23c57c1cc5cae8a236" exitCode=0 Jan 21 10:12:44 crc kubenswrapper[4684]: I0121 10:12:44.110037 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chnqm" event={"ID":"7a860bc7-f569-4865-bba1-65aded1a7dac","Type":"ContainerDied","Data":"7f7ad62b27777e1d2c9538e13f834ed967966025c05c5d23c57c1cc5cae8a236"} Jan 21 10:12:44 crc 
kubenswrapper[4684]: I0121 10:12:44.113465 4684 generic.go:334] "Generic (PLEG): container finished" podID="db0f8c13-b2c9-405d-8a57-4a68439bb296" containerID="c65a73013a8c2aca1ff29164fb11e5571dc722df10db50b8c1777c587b0ac043" exitCode=0 Jan 21 10:12:44 crc kubenswrapper[4684]: I0121 10:12:44.113575 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwnx5" event={"ID":"db0f8c13-b2c9-405d-8a57-4a68439bb296","Type":"ContainerDied","Data":"c65a73013a8c2aca1ff29164fb11e5571dc722df10db50b8c1777c587b0ac043"} Jan 21 10:12:45 crc kubenswrapper[4684]: I0121 10:12:45.121044 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwnx5" event={"ID":"db0f8c13-b2c9-405d-8a57-4a68439bb296","Type":"ContainerStarted","Data":"33e01e28007c314a8599dbe93f09797e4f84f44ae69a354205c2363bc98dc5ac"} Jan 21 10:12:45 crc kubenswrapper[4684]: I0121 10:12:45.124306 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chnqm" event={"ID":"7a860bc7-f569-4865-bba1-65aded1a7dac","Type":"ContainerStarted","Data":"024720bd5a20403d4fcf02424eaef611edc62ca48f9b928284f2b1def108e951"} Jan 21 10:12:45 crc kubenswrapper[4684]: I0121 10:12:45.141528 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dwnx5" podStartSLOduration=2.68635758 podStartE2EDuration="5.141510393s" podCreationTimestamp="2026-01-21 10:12:40 +0000 UTC" firstStartedPulling="2026-01-21 10:12:42.064465131 +0000 UTC m=+399.822548098" lastFinishedPulling="2026-01-21 10:12:44.519617954 +0000 UTC m=+402.277700911" observedRunningTime="2026-01-21 10:12:45.138037166 +0000 UTC m=+402.896120133" watchObservedRunningTime="2026-01-21 10:12:45.141510393 +0000 UTC m=+402.899593360" Jan 21 10:12:45 crc kubenswrapper[4684]: I0121 10:12:45.171014 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-chnqm" podStartSLOduration=2.649780455 podStartE2EDuration="4.170988908s" podCreationTimestamp="2026-01-21 10:12:41 +0000 UTC" firstStartedPulling="2026-01-21 10:12:43.079922306 +0000 UTC m=+400.838005273" lastFinishedPulling="2026-01-21 10:12:44.601130759 +0000 UTC m=+402.359213726" observedRunningTime="2026-01-21 10:12:45.164649847 +0000 UTC m=+402.922732814" watchObservedRunningTime="2026-01-21 10:12:45.170988908 +0000 UTC m=+402.929071865" Jan 21 10:12:48 crc kubenswrapper[4684]: I0121 10:12:48.790809 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q79ws" Jan 21 10:12:48 crc kubenswrapper[4684]: I0121 10:12:48.791543 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q79ws" Jan 21 10:12:48 crc kubenswrapper[4684]: I0121 10:12:48.847487 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q79ws" Jan 21 10:12:48 crc kubenswrapper[4684]: I0121 10:12:48.989993 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mk9s5" Jan 21 10:12:48 crc kubenswrapper[4684]: I0121 10:12:48.990077 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mk9s5" Jan 21 10:12:49 crc kubenswrapper[4684]: I0121 10:12:49.042857 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-mk9s5" Jan 21 10:12:49 crc kubenswrapper[4684]: I0121 10:12:49.192161 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mk9s5" Jan 21 10:12:49 crc kubenswrapper[4684]: I0121 10:12:49.194003 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q79ws" Jan 21 10:12:51 crc kubenswrapper[4684]: I0121 10:12:51.250498 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dwnx5" Jan 21 10:12:51 crc kubenswrapper[4684]: I0121 10:12:51.251658 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dwnx5" Jan 21 10:12:51 crc kubenswrapper[4684]: I0121 10:12:51.307345 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dwnx5" Jan 21 10:12:51 crc kubenswrapper[4684]: I0121 10:12:51.386796 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-chnqm" Jan 21 10:12:51 crc kubenswrapper[4684]: I0121 10:12:51.386892 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-chnqm" Jan 21 10:12:51 crc kubenswrapper[4684]: I0121 10:12:51.426046 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-chnqm" Jan 21 10:12:52 crc kubenswrapper[4684]: I0121 10:12:52.213528 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dwnx5" Jan 21 10:12:52 crc kubenswrapper[4684]: I0121 10:12:52.214231 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-chnqm" Jan 21 10:12:54 crc kubenswrapper[4684]: I0121 10:12:54.303046 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" podUID="79b7756e-38ae-4b6c-acc0-b4c0f825b3ad" containerName="registry" containerID="cri-o://c056e47f5eca2d60a43f1542b5fd7b483bbd0c5901d8ade48408d6ecacabcd27" gracePeriod=30 Jan 21 10:12:56 crc kubenswrapper[4684]: I0121 10:12:56.192386 4684 generic.go:334] "Generic (PLEG): container finished" podID="79b7756e-38ae-4b6c-acc0-b4c0f825b3ad" containerID="c056e47f5eca2d60a43f1542b5fd7b483bbd0c5901d8ade48408d6ecacabcd27" exitCode=0 Jan 21 10:12:56 crc kubenswrapper[4684]: I0121 10:12:56.192439 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" event={"ID":"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad","Type":"ContainerDied","Data":"c056e47f5eca2d60a43f1542b5fd7b483bbd0c5901d8ade48408d6ecacabcd27"} Jan 21 10:12:56 crc kubenswrapper[4684]: I0121 10:12:56.740277 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:12:56 crc kubenswrapper[4684]: I0121 10:12:56.883264 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-trusted-ca\") pod \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " Jan 21 10:12:56 crc kubenswrapper[4684]: I0121 10:12:56.883344 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-bound-sa-token\") pod \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " Jan 21 10:12:56 crc kubenswrapper[4684]: I0121 10:12:56.883705 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " Jan 21 10:12:56 crc kubenswrapper[4684]: I0121 10:12:56.883756 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8jzj\" (UniqueName: \"kubernetes.io/projected/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-kube-api-access-q8jzj\") pod \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " Jan 21 10:12:56 crc kubenswrapper[4684]: I0121 10:12:56.883816 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-registry-tls\") pod \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " Jan 21 10:12:56 crc kubenswrapper[4684]: I0121 10:12:56.883906 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-ca-trust-extracted\") pod \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " Jan 21 10:12:56 crc kubenswrapper[4684]: I0121 10:12:56.883941 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-installation-pull-secrets\") pod \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " Jan 21 10:12:56 crc kubenswrapper[4684]: I0121 10:12:56.883989 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-registry-certificates\") pod \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\" (UID: \"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad\") " Jan 21 10:12:56 crc kubenswrapper[4684]: I0121 10:12:56.884751 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:12:56 crc kubenswrapper[4684]: I0121 10:12:56.886872 4684 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:12:56 crc kubenswrapper[4684]: I0121 10:12:56.887503 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:12:56 crc kubenswrapper[4684]: I0121 10:12:56.893636 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:12:56 crc kubenswrapper[4684]: I0121 10:12:56.893701 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:12:56 crc kubenswrapper[4684]: I0121 10:12:56.894543 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-kube-api-access-q8jzj" (OuterVolumeSpecName: "kube-api-access-q8jzj") pod "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad"). InnerVolumeSpecName "kube-api-access-q8jzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:12:56 crc kubenswrapper[4684]: I0121 10:12:56.901819 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 10:12:56 crc kubenswrapper[4684]: I0121 10:12:56.904619 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:12:56 crc kubenswrapper[4684]: I0121 10:12:56.905536 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad" (UID: "79b7756e-38ae-4b6c-acc0-b4c0f825b3ad"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:12:56 crc kubenswrapper[4684]: I0121 10:12:56.987519 4684 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 10:12:56 crc kubenswrapper[4684]: I0121 10:12:56.987571 4684 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 10:12:56 crc kubenswrapper[4684]: I0121 10:12:56.987590 4684 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 10:12:56 crc kubenswrapper[4684]: I0121 10:12:56.987605 4684 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 10:12:56 crc kubenswrapper[4684]: I0121 10:12:56.987616 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8jzj\" (UniqueName: \"kubernetes.io/projected/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-kube-api-access-q8jzj\") on node \"crc\" DevicePath \"\"" Jan 21 10:12:56 crc kubenswrapper[4684]: I0121 10:12:56.987626 4684 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:12:57 crc kubenswrapper[4684]: I0121 10:12:57.199968 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" event={"ID":"79b7756e-38ae-4b6c-acc0-b4c0f825b3ad","Type":"ContainerDied","Data":"6871f9ad1f607ff61d102f10265b65e3c44e1dcff0df229e75249b66b27476ad"} Jan 21 10:12:57 crc kubenswrapper[4684]: I0121 10:12:57.200049 4684 scope.go:117] "RemoveContainer" containerID="c056e47f5eca2d60a43f1542b5fd7b483bbd0c5901d8ade48408d6ecacabcd27" Jan 21 10:12:57 crc kubenswrapper[4684]: I0121 10:12:57.200044 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8k2vm" Jan 21 10:12:57 crc kubenswrapper[4684]: I0121 10:12:57.235244 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8k2vm"] Jan 21 10:12:57 crc kubenswrapper[4684]: I0121 10:12:57.240100 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8k2vm"] Jan 21 10:12:58 crc kubenswrapper[4684]: I0121 10:12:58.521928 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79b7756e-38ae-4b6c-acc0-b4c0f825b3ad" path="/var/lib/kubelet/pods/79b7756e-38ae-4b6c-acc0-b4c0f825b3ad/volumes" Jan 21 10:13:07 crc kubenswrapper[4684]: I0121 10:13:07.303023 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:13:07 crc kubenswrapper[4684]: I0121 10:13:07.303708 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:13:07 crc kubenswrapper[4684]: I0121 10:13:07.303775 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" Jan 21 10:13:07 crc kubenswrapper[4684]: I0121 10:13:07.305606 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"55b5b3c030e129d333cf3f4376ff223084ac8839dda379f9382b41f6c6e1b483"} pod="openshift-machine-config-operator/machine-config-daemon-sff6s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 10:13:07 crc kubenswrapper[4684]: I0121 10:13:07.305813 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" containerID="cri-o://55b5b3c030e129d333cf3f4376ff223084ac8839dda379f9382b41f6c6e1b483" gracePeriod=600 Jan 21 10:13:08 crc kubenswrapper[4684]: I0121 10:13:08.275993 4684 generic.go:334] "Generic (PLEG): container finished" podID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerID="55b5b3c030e129d333cf3f4376ff223084ac8839dda379f9382b41f6c6e1b483" exitCode=0 Jan 21 10:13:08 crc kubenswrapper[4684]: I0121 10:13:08.276102 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" event={"ID":"55d2c484-cf10-46b5-913f-2a033a2ff5c1","Type":"ContainerDied","Data":"55b5b3c030e129d333cf3f4376ff223084ac8839dda379f9382b41f6c6e1b483"} Jan 21 10:13:08 crc kubenswrapper[4684]: I0121 10:13:08.276792 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" event={"ID":"55d2c484-cf10-46b5-913f-2a033a2ff5c1","Type":"ContainerStarted","Data":"422c1ec33f05e6485754517d18dd0a73fb82adbe8ba5ea99f708cebf3062fd45"} Jan 21 10:13:08 crc kubenswrapper[4684]: I0121 10:13:08.276838 4684 scope.go:117] "RemoveContainer" 
containerID="dc88e92afc828edb6e050e6f89243fe1cac78ccf4b300bd1436991b2c11a2335" Jan 21 10:15:00 crc kubenswrapper[4684]: I0121 10:15:00.191986 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483175-fkcv2"] Jan 21 10:15:00 crc kubenswrapper[4684]: E0121 10:15:00.192904 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79b7756e-38ae-4b6c-acc0-b4c0f825b3ad" containerName="registry" Jan 21 10:15:00 crc kubenswrapper[4684]: I0121 10:15:00.192925 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="79b7756e-38ae-4b6c-acc0-b4c0f825b3ad" containerName="registry" Jan 21 10:15:00 crc kubenswrapper[4684]: I0121 10:15:00.193076 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="79b7756e-38ae-4b6c-acc0-b4c0f825b3ad" containerName="registry" Jan 21 10:15:00 crc kubenswrapper[4684]: I0121 10:15:00.193657 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-fkcv2" Jan 21 10:15:00 crc kubenswrapper[4684]: I0121 10:15:00.197441 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 10:15:00 crc kubenswrapper[4684]: I0121 10:15:00.197756 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 10:15:00 crc kubenswrapper[4684]: I0121 10:15:00.202404 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483175-fkcv2"] Jan 21 10:15:00 crc kubenswrapper[4684]: I0121 10:15:00.265389 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8b300558-3052-4644-9616-86a87a9eacbc-secret-volume\") pod \"collect-profiles-29483175-fkcv2\" (UID: \"8b300558-3052-4644-9616-86a87a9eacbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-fkcv2" Jan 21 10:15:00 crc kubenswrapper[4684]: I0121 10:15:00.265480 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b300558-3052-4644-9616-86a87a9eacbc-config-volume\") pod \"collect-profiles-29483175-fkcv2\" (UID: \"8b300558-3052-4644-9616-86a87a9eacbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-fkcv2" Jan 21 10:15:00 crc kubenswrapper[4684]: I0121 10:15:00.265652 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p25tp\" (UniqueName: \"kubernetes.io/projected/8b300558-3052-4644-9616-86a87a9eacbc-kube-api-access-p25tp\") pod \"collect-profiles-29483175-fkcv2\" (UID: \"8b300558-3052-4644-9616-86a87a9eacbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-fkcv2" Jan 21 10:15:00 crc kubenswrapper[4684]: I0121 10:15:00.366623 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8b300558-3052-4644-9616-86a87a9eacbc-secret-volume\") pod \"collect-profiles-29483175-fkcv2\" (UID: \"8b300558-3052-4644-9616-86a87a9eacbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-fkcv2" Jan 21 10:15:00 crc kubenswrapper[4684]: I0121 10:15:00.366710 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b300558-3052-4644-9616-86a87a9eacbc-config-volume\") pod \"collect-profiles-29483175-fkcv2\" (UID: \"8b300558-3052-4644-9616-86a87a9eacbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-fkcv2" Jan 21 10:15:00 crc kubenswrapper[4684]: I0121 10:15:00.366835 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p25tp\" (UniqueName: \"kubernetes.io/projected/8b300558-3052-4644-9616-86a87a9eacbc-kube-api-access-p25tp\") pod \"collect-profiles-29483175-fkcv2\" (UID: \"8b300558-3052-4644-9616-86a87a9eacbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-fkcv2" Jan 21 10:15:00 crc kubenswrapper[4684]: I0121 10:15:00.368680 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b300558-3052-4644-9616-86a87a9eacbc-config-volume\") pod \"collect-profiles-29483175-fkcv2\" (UID: \"8b300558-3052-4644-9616-86a87a9eacbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-fkcv2" Jan 21 10:15:00 crc kubenswrapper[4684]: I0121 10:15:00.376975 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8b300558-3052-4644-9616-86a87a9eacbc-secret-volume\") pod \"collect-profiles-29483175-fkcv2\" (UID: \"8b300558-3052-4644-9616-86a87a9eacbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-fkcv2" Jan 21 10:15:00 crc kubenswrapper[4684]: I0121 10:15:00.396011 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p25tp\" (UniqueName: \"kubernetes.io/projected/8b300558-3052-4644-9616-86a87a9eacbc-kube-api-access-p25tp\") pod \"collect-profiles-29483175-fkcv2\" (UID: \"8b300558-3052-4644-9616-86a87a9eacbc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-fkcv2" Jan 21 10:15:00 crc kubenswrapper[4684]: I0121 10:15:00.520246 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-fkcv2" Jan 21 10:15:00 crc kubenswrapper[4684]: I0121 10:15:00.806030 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483175-fkcv2"] Jan 21 10:15:00 crc kubenswrapper[4684]: W0121 10:15:00.818518 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b300558_3052_4644_9616_86a87a9eacbc.slice/crio-0ddfa73bc40de42a17b8348edf77209e75852fbc7240d3c23dfbf1683b674cd2 WatchSource:0}: Error finding container 0ddfa73bc40de42a17b8348edf77209e75852fbc7240d3c23dfbf1683b674cd2: Status 404 returned error can't find the container with id 0ddfa73bc40de42a17b8348edf77209e75852fbc7240d3c23dfbf1683b674cd2 Jan 21 10:15:00 crc kubenswrapper[4684]: I0121 10:15:00.996489 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-fkcv2" event={"ID":"8b300558-3052-4644-9616-86a87a9eacbc","Type":"ContainerStarted","Data":"86b2f69531cbe158beb3576156a78673cac28d6252d724c7e49575f70cb7aa32"} Jan 21 10:15:00 crc kubenswrapper[4684]: I0121 10:15:00.996933 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-fkcv2" event={"ID":"8b300558-3052-4644-9616-86a87a9eacbc","Type":"ContainerStarted","Data":"0ddfa73bc40de42a17b8348edf77209e75852fbc7240d3c23dfbf1683b674cd2"} Jan 21 10:15:01 crc kubenswrapper[4684]: I0121 10:15:01.021196 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-fkcv2" podStartSLOduration=1.02116867 podStartE2EDuration="1.02116867s" podCreationTimestamp="2026-01-21 10:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:15:01.019569842 +0000 UTC m=+538.777652819" watchObservedRunningTime="2026-01-21 10:15:01.02116867 +0000 UTC m=+538.779251677" Jan 21 10:15:02 crc kubenswrapper[4684]: I0121 10:15:02.005350 4684 generic.go:334] "Generic (PLEG): container finished" podID="8b300558-3052-4644-9616-86a87a9eacbc" containerID="86b2f69531cbe158beb3576156a78673cac28d6252d724c7e49575f70cb7aa32" exitCode=0 Jan 21 10:15:02 crc kubenswrapper[4684]: I0121 10:15:02.005449 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-fkcv2" event={"ID":"8b300558-3052-4644-9616-86a87a9eacbc","Type":"ContainerDied","Data":"86b2f69531cbe158beb3576156a78673cac28d6252d724c7e49575f70cb7aa32"} Jan 21 10:15:03 crc kubenswrapper[4684]: I0121 10:15:03.306992 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-fkcv2" Jan 21 10:15:03 crc kubenswrapper[4684]: I0121 10:15:03.413739 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b300558-3052-4644-9616-86a87a9eacbc-config-volume\") pod \"8b300558-3052-4644-9616-86a87a9eacbc\" (UID: \"8b300558-3052-4644-9616-86a87a9eacbc\") " Jan 21 10:15:03 crc kubenswrapper[4684]: I0121 10:15:03.413839 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p25tp\" (UniqueName: \"kubernetes.io/projected/8b300558-3052-4644-9616-86a87a9eacbc-kube-api-access-p25tp\") pod \"8b300558-3052-4644-9616-86a87a9eacbc\" (UID: \"8b300558-3052-4644-9616-86a87a9eacbc\") " Jan 21 10:15:03 crc kubenswrapper[4684]: I0121 10:15:03.413926 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8b300558-3052-4644-9616-86a87a9eacbc-secret-volume\") pod \"8b300558-3052-4644-9616-86a87a9eacbc\" (UID: \"8b300558-3052-4644-9616-86a87a9eacbc\") " Jan 21 10:15:03 crc kubenswrapper[4684]: I0121 10:15:03.414624 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b300558-3052-4644-9616-86a87a9eacbc-config-volume" (OuterVolumeSpecName: "config-volume") pod "8b300558-3052-4644-9616-86a87a9eacbc" (UID: "8b300558-3052-4644-9616-86a87a9eacbc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:15:03 crc kubenswrapper[4684]: I0121 10:15:03.419340 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b300558-3052-4644-9616-86a87a9eacbc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8b300558-3052-4644-9616-86a87a9eacbc" (UID: "8b300558-3052-4644-9616-86a87a9eacbc"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:15:03 crc kubenswrapper[4684]: I0121 10:15:03.419383 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b300558-3052-4644-9616-86a87a9eacbc-kube-api-access-p25tp" (OuterVolumeSpecName: "kube-api-access-p25tp") pod "8b300558-3052-4644-9616-86a87a9eacbc" (UID: "8b300558-3052-4644-9616-86a87a9eacbc"). InnerVolumeSpecName "kube-api-access-p25tp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:15:03 crc kubenswrapper[4684]: I0121 10:15:03.515256 4684 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b300558-3052-4644-9616-86a87a9eacbc-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 10:15:03 crc kubenswrapper[4684]: I0121 10:15:03.515311 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p25tp\" (UniqueName: \"kubernetes.io/projected/8b300558-3052-4644-9616-86a87a9eacbc-kube-api-access-p25tp\") on node \"crc\" DevicePath \"\"" Jan 21 10:15:03 crc kubenswrapper[4684]: I0121 10:15:03.515337 4684 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8b300558-3052-4644-9616-86a87a9eacbc-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 10:15:04 crc kubenswrapper[4684]: I0121 10:15:04.020510 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-fkcv2" event={"ID":"8b300558-3052-4644-9616-86a87a9eacbc","Type":"ContainerDied","Data":"0ddfa73bc40de42a17b8348edf77209e75852fbc7240d3c23dfbf1683b674cd2"} Jan 21 10:15:04 crc kubenswrapper[4684]: I0121 10:15:04.020570 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ddfa73bc40de42a17b8348edf77209e75852fbc7240d3c23dfbf1683b674cd2" Jan 21 10:15:04 crc kubenswrapper[4684]: I0121 10:15:04.020580 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-fkcv2" Jan 21 10:15:07 crc kubenswrapper[4684]: I0121 10:15:07.303028 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:15:07 crc kubenswrapper[4684]: I0121 10:15:07.303124 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:15:37 crc kubenswrapper[4684]: I0121 10:15:37.302916 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:15:37 crc kubenswrapper[4684]: I0121 10:15:37.303542 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:16:07 crc kubenswrapper[4684]: I0121 10:16:07.301990 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:16:07 crc kubenswrapper[4684]: I0121 
10:16:07.303529 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:16:07 crc kubenswrapper[4684]: I0121 10:16:07.303610 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" Jan 21 10:16:07 crc kubenswrapper[4684]: I0121 10:16:07.304799 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"422c1ec33f05e6485754517d18dd0a73fb82adbe8ba5ea99f708cebf3062fd45"} pod="openshift-machine-config-operator/machine-config-daemon-sff6s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 10:16:07 crc kubenswrapper[4684]: I0121 10:16:07.304913 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" containerID="cri-o://422c1ec33f05e6485754517d18dd0a73fb82adbe8ba5ea99f708cebf3062fd45" gracePeriod=600 Jan 21 10:16:08 crc kubenswrapper[4684]: I0121 10:16:08.420972 4684 generic.go:334] "Generic (PLEG): container finished" podID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerID="422c1ec33f05e6485754517d18dd0a73fb82adbe8ba5ea99f708cebf3062fd45" exitCode=0 Jan 21 10:16:08 crc kubenswrapper[4684]: I0121 10:16:08.421052 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" event={"ID":"55d2c484-cf10-46b5-913f-2a033a2ff5c1","Type":"ContainerDied","Data":"422c1ec33f05e6485754517d18dd0a73fb82adbe8ba5ea99f708cebf3062fd45"} Jan 21 10:16:08 crc kubenswrapper[4684]: I0121 10:16:08.421627 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" event={"ID":"55d2c484-cf10-46b5-913f-2a033a2ff5c1","Type":"ContainerStarted","Data":"be7cd103ac0b509678b75cd5e797eb3c7c476dcd51d6bf722a679b736d58aab7"} Jan 21 10:16:08 crc kubenswrapper[4684]: I0121 10:16:08.421661 4684 scope.go:117] "RemoveContainer" containerID="55b5b3c030e129d333cf3f4376ff223084ac8839dda379f9382b41f6c6e1b483" Jan 21 10:17:06 crc kubenswrapper[4684]: I0121 10:17:06.811752 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6vjwl"] Jan 21 10:17:06 crc kubenswrapper[4684]: I0121 10:17:06.812655 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="ovn-controller" containerID="cri-o://472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544" gracePeriod=30 Jan 21 10:17:06 crc kubenswrapper[4684]: I0121 10:17:06.812773 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="kube-rbac-proxy-node" containerID="cri-o://ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124" gracePeriod=30 Jan 21 10:17:06 crc kubenswrapper[4684]: I0121 10:17:06.812755 4684 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="nbdb" containerID="cri-o://6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e" gracePeriod=30 Jan 21 10:17:06 crc kubenswrapper[4684]: I0121 10:17:06.812773 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="northd" containerID="cri-o://0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29" gracePeriod=30 Jan 21 10:17:06 crc kubenswrapper[4684]: I0121 10:17:06.812754 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b" gracePeriod=30 Jan 21 10:17:06 crc kubenswrapper[4684]: I0121 10:17:06.812807 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="ovn-acl-logging" containerID="cri-o://0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c" gracePeriod=30 Jan 21 10:17:06 crc kubenswrapper[4684]: I0121 10:17:06.812916 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="sbdb" containerID="cri-o://24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294" gracePeriod=30 Jan 21 10:17:06 crc kubenswrapper[4684]: I0121 10:17:06.854976 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="ovnkube-controller" containerID="cri-o://209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae" gracePeriod=30 Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.091452 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6vjwl_8dac888b-051f-405a-8c23-60c205d2aecc/ovnkube-controller/3.log" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.093850 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6vjwl_8dac888b-051f-405a-8c23-60c205d2aecc/ovn-acl-logging/0.log" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.094406 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6vjwl_8dac888b-051f-405a-8c23-60c205d2aecc/ovn-controller/0.log" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.094925 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.133069 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-systemd-units\") pod \"8dac888b-051f-405a-8c23-60c205d2aecc\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.133117 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-etc-openvswitch\") pod \"8dac888b-051f-405a-8c23-60c205d2aecc\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.133158 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8dac888b-051f-405a-8c23-60c205d2aecc-ovnkube-config\") pod \"8dac888b-051f-405a-8c23-60c205d2aecc\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.133187 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8dac888b-051f-405a-8c23-60c205d2aecc-ovnkube-script-lib\") pod \"8dac888b-051f-405a-8c23-60c205d2aecc\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.133183 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "8dac888b-051f-405a-8c23-60c205d2aecc" (UID: "8dac888b-051f-405a-8c23-60c205d2aecc"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.133204 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "8dac888b-051f-405a-8c23-60c205d2aecc" (UID: "8dac888b-051f-405a-8c23-60c205d2aecc"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.133216 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-run-openvswitch\") pod \"8dac888b-051f-405a-8c23-60c205d2aecc\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.133248 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "8dac888b-051f-405a-8c23-60c205d2aecc" (UID: "8dac888b-051f-405a-8c23-60c205d2aecc"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.133269 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"8dac888b-051f-405a-8c23-60c205d2aecc\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.133302 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-run-ovn-kubernetes\") pod \"8dac888b-051f-405a-8c23-60c205d2aecc\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.133324 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-run-systemd\") pod \"8dac888b-051f-405a-8c23-60c205d2aecc\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.133343 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-cni-netd\") pod \"8dac888b-051f-405a-8c23-60c205d2aecc\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.133371 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "8dac888b-051f-405a-8c23-60c205d2aecc" (UID: "8dac888b-051f-405a-8c23-60c205d2aecc"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.133389 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-run-ovn\") pod \"8dac888b-051f-405a-8c23-60c205d2aecc\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.133402 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "8dac888b-051f-405a-8c23-60c205d2aecc" (UID: "8dac888b-051f-405a-8c23-60c205d2aecc"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.133416 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-node-log\") pod \"8dac888b-051f-405a-8c23-60c205d2aecc\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.133430 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "8dac888b-051f-405a-8c23-60c205d2aecc" (UID: "8dac888b-051f-405a-8c23-60c205d2aecc"). 
InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.133456 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8dac888b-051f-405a-8c23-60c205d2aecc-ovn-node-metrics-cert\") pod \"8dac888b-051f-405a-8c23-60c205d2aecc\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.133479 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-kubelet\") pod \"8dac888b-051f-405a-8c23-60c205d2aecc\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.133507 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-cni-bin\") pod \"8dac888b-051f-405a-8c23-60c205d2aecc\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.133539 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-run-netns\") pod \"8dac888b-051f-405a-8c23-60c205d2aecc\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.133559 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-slash\") pod \"8dac888b-051f-405a-8c23-60c205d2aecc\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.133581 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-var-lib-openvswitch\") pod \"8dac888b-051f-405a-8c23-60c205d2aecc\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.133629 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-log-socket\") pod \"8dac888b-051f-405a-8c23-60c205d2aecc\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.133658 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h68cm\" (UniqueName: \"kubernetes.io/projected/8dac888b-051f-405a-8c23-60c205d2aecc-kube-api-access-h68cm\") pod \"8dac888b-051f-405a-8c23-60c205d2aecc\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.133679 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8dac888b-051f-405a-8c23-60c205d2aecc-env-overrides\") pod \"8dac888b-051f-405a-8c23-60c205d2aecc\" (UID: \"8dac888b-051f-405a-8c23-60c205d2aecc\") " Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.133739 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod 
"8dac888b-051f-405a-8c23-60c205d2aecc" (UID: "8dac888b-051f-405a-8c23-60c205d2aecc"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.133766 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "8dac888b-051f-405a-8c23-60c205d2aecc" (UID: "8dac888b-051f-405a-8c23-60c205d2aecc"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.133792 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-node-log" (OuterVolumeSpecName: "node-log") pod "8dac888b-051f-405a-8c23-60c205d2aecc" (UID: "8dac888b-051f-405a-8c23-60c205d2aecc"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.134311 4684 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-node-log\") on node \"crc\" DevicePath \"\"" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.134333 4684 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.134344 4684 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.134357 4684 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.134433 4684 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.134443 4684 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.134455 4684 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.134468 4684 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.134504 4684 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 21 10:17:07 crc 
kubenswrapper[4684]: I0121 10:17:07.134537 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "8dac888b-051f-405a-8c23-60c205d2aecc" (UID: "8dac888b-051f-405a-8c23-60c205d2aecc"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.134612 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-slash" (OuterVolumeSpecName: "host-slash") pod "8dac888b-051f-405a-8c23-60c205d2aecc" (UID: "8dac888b-051f-405a-8c23-60c205d2aecc"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.133771 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "8dac888b-051f-405a-8c23-60c205d2aecc" (UID: "8dac888b-051f-405a-8c23-60c205d2aecc"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.134647 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-log-socket" (OuterVolumeSpecName: "log-socket") pod "8dac888b-051f-405a-8c23-60c205d2aecc" (UID: "8dac888b-051f-405a-8c23-60c205d2aecc"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.134721 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "8dac888b-051f-405a-8c23-60c205d2aecc" (UID: "8dac888b-051f-405a-8c23-60c205d2aecc"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.134062 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dac888b-051f-405a-8c23-60c205d2aecc-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "8dac888b-051f-405a-8c23-60c205d2aecc" (UID: "8dac888b-051f-405a-8c23-60c205d2aecc"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.136011 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dac888b-051f-405a-8c23-60c205d2aecc-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "8dac888b-051f-405a-8c23-60c205d2aecc" (UID: "8dac888b-051f-405a-8c23-60c205d2aecc"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.138125 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dac888b-051f-405a-8c23-60c205d2aecc-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "8dac888b-051f-405a-8c23-60c205d2aecc" (UID: "8dac888b-051f-405a-8c23-60c205d2aecc"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.143948 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dac888b-051f-405a-8c23-60c205d2aecc-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "8dac888b-051f-405a-8c23-60c205d2aecc" (UID: "8dac888b-051f-405a-8c23-60c205d2aecc"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.144109 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dac888b-051f-405a-8c23-60c205d2aecc-kube-api-access-h68cm" (OuterVolumeSpecName: "kube-api-access-h68cm") pod "8dac888b-051f-405a-8c23-60c205d2aecc" (UID: "8dac888b-051f-405a-8c23-60c205d2aecc"). InnerVolumeSpecName "kube-api-access-h68cm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.145616 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jxnn6"] Jan 21 10:17:07 crc kubenswrapper[4684]: E0121 10:17:07.145927 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="ovn-acl-logging" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.146007 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="ovn-acl-logging" Jan 21 10:17:07 crc kubenswrapper[4684]: E0121 10:17:07.146075 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="ovnkube-controller" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.146141 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="ovnkube-controller" Jan 21 10:17:07 crc kubenswrapper[4684]: E0121 10:17:07.146199 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="ovnkube-controller" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.146281 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="ovnkube-controller" Jan 21 10:17:07 crc kubenswrapper[4684]: E0121 10:17:07.146348 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="ovn-controller" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.146478 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="ovn-controller" Jan 21 10:17:07 crc kubenswrapper[4684]: E0121 10:17:07.146556 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="kube-rbac-proxy-node" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.146619 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="kube-rbac-proxy-node" Jan 21 10:17:07 crc kubenswrapper[4684]: E0121 10:17:07.146680 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="kubecfg-setup" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.146742 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="kubecfg-setup" Jan 21 
10:17:07 crc kubenswrapper[4684]: E0121 10:17:07.146806 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b300558-3052-4644-9616-86a87a9eacbc" containerName="collect-profiles" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.146869 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b300558-3052-4644-9616-86a87a9eacbc" containerName="collect-profiles" Jan 21 10:17:07 crc kubenswrapper[4684]: E0121 10:17:07.146934 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.147000 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 10:17:07 crc kubenswrapper[4684]: E0121 10:17:07.147065 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="nbdb" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.147126 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="nbdb" Jan 21 10:17:07 crc kubenswrapper[4684]: E0121 10:17:07.147194 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="ovnkube-controller" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.147256 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="ovnkube-controller" Jan 21 10:17:07 crc kubenswrapper[4684]: E0121 10:17:07.147325 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="ovnkube-controller" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.147408 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="ovnkube-controller" Jan 21 10:17:07 crc kubenswrapper[4684]: E0121 10:17:07.147474 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="sbdb" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.147539 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="sbdb" Jan 21 10:17:07 crc kubenswrapper[4684]: E0121 10:17:07.147606 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="northd" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.147664 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="northd" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.147842 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="ovnkube-controller" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.148424 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="sbdb" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.148500 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b300558-3052-4644-9616-86a87a9eacbc" containerName="collect-profiles" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.148574 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="northd" Jan 21 
10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.148645 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.148743 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="ovn-controller" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.148814 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="ovn-acl-logging" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.148885 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="ovnkube-controller" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.148949 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="kube-rbac-proxy-node" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.149013 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="nbdb" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.149078 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="ovnkube-controller" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.149144 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="ovnkube-controller" Jan 21 10:17:07 crc kubenswrapper[4684]: E0121 10:17:07.149316 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="ovnkube-controller" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.149427 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="ovnkube-controller" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.149603 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" containerName="ovnkube-controller" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.151696 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.152466 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "8dac888b-051f-405a-8c23-60c205d2aecc" (UID: "8dac888b-051f-405a-8c23-60c205d2aecc"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.235631 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-etc-openvswitch\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.235667 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/716ec6e6-cde7-4231-a1e5-753834c32812-ovnkube-script-lib\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.235684 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-host-slash\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.235707 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-host-run-netns\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.235729 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-host-cni-bin\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.235745 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/716ec6e6-cde7-4231-a1e5-753834c32812-ovn-node-metrics-cert\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.235767 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-node-log\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.235784 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/716ec6e6-cde7-4231-a1e5-753834c32812-ovnkube-config\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.235804 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-systemd-units\") pod 
\"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.235822 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-run-openvswitch\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.235841 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-var-lib-openvswitch\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.235858 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-host-cni-netd\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.235872 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8shg2\" (UniqueName: \"kubernetes.io/projected/716ec6e6-cde7-4231-a1e5-753834c32812-kube-api-access-8shg2\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.235888 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-run-systemd\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.235906 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-log-socket\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.235924 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.235942 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/716ec6e6-cde7-4231-a1e5-753834c32812-env-overrides\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.235958 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-host-kubelet\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.235975 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-host-run-ovn-kubernetes\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.235995 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-run-ovn\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.236029 4684 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8dac888b-051f-405a-8c23-60c205d2aecc-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.236040 4684 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.236049 4684 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.236057 4684 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-host-slash\") on node \"crc\" DevicePath \"\"" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.236066 4684 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-log-socket\") on node \"crc\" DevicePath \"\"" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.236074 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h68cm\" (UniqueName: \"kubernetes.io/projected/8dac888b-051f-405a-8c23-60c205d2aecc-kube-api-access-h68cm\") on node \"crc\" DevicePath \"\"" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.236083 4684 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8dac888b-051f-405a-8c23-60c205d2aecc-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.236091 4684 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8dac888b-051f-405a-8c23-60c205d2aecc-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.236099 4684 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8dac888b-051f-405a-8c23-60c205d2aecc-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.236107 
4684 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.236116 4684 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8dac888b-051f-405a-8c23-60c205d2aecc-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.337826 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-log-socket\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.337945 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.337964 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-log-socket\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.338003 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/716ec6e6-cde7-4231-a1e5-753834c32812-env-overrides\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.338077 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-host-kubelet\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.338118 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.338125 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-host-run-ovn-kubernetes\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.338261 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-run-ovn\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.338169 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-host-run-ovn-kubernetes\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.338344 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-etc-openvswitch\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.338461 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-host-kubelet\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.338344 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-run-ovn\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.338416 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-etc-openvswitch\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.338513 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/716ec6e6-cde7-4231-a1e5-753834c32812-ovnkube-script-lib\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.338733 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-host-slash\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.338811 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-host-run-netns\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.338855 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-host-slash\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.338890 4684 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-host-cni-bin\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.338918 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-host-run-netns\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.338928 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/716ec6e6-cde7-4231-a1e5-753834c32812-ovn-node-metrics-cert\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.338965 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-host-cni-bin\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.338984 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-node-log\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.339026 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/716ec6e6-cde7-4231-a1e5-753834c32812-ovnkube-config\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.339119 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-systemd-units\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.339188 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-systemd-units\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.339115 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-node-log\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.339278 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-run-openvswitch\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.339428 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-var-lib-openvswitch\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.339451 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/716ec6e6-cde7-4231-a1e5-753834c32812-ovnkube-script-lib\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.339486 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-var-lib-openvswitch\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.339320 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/716ec6e6-cde7-4231-a1e5-753834c32812-env-overrides\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.339433 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-run-openvswitch\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.339530 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-host-cni-netd\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.339580 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-host-cni-netd\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.339627 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8shg2\" (UniqueName: \"kubernetes.io/projected/716ec6e6-cde7-4231-a1e5-753834c32812-kube-api-access-8shg2\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.339693 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-run-systemd\") pod 
\"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.339823 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/716ec6e6-cde7-4231-a1e5-753834c32812-run-systemd\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.340164 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/716ec6e6-cde7-4231-a1e5-753834c32812-ovnkube-config\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.343568 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/716ec6e6-cde7-4231-a1e5-753834c32812-ovn-node-metrics-cert\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.365003 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8shg2\" (UniqueName: \"kubernetes.io/projected/716ec6e6-cde7-4231-a1e5-753834c32812-kube-api-access-8shg2\") pod \"ovnkube-node-jxnn6\" (UID: \"716ec6e6-cde7-4231-a1e5-753834c32812\") " pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.482713 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:07 crc kubenswrapper[4684]: W0121 10:17:07.511025 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod716ec6e6_cde7_4231_a1e5_753834c32812.slice/crio-eaca20b37867ce456ac3ea88f25fe87a9256014f317c252c407d4d833d238a7f WatchSource:0}: Error finding container eaca20b37867ce456ac3ea88f25fe87a9256014f317c252c407d4d833d238a7f: Status 404 returned error can't find the container with id eaca20b37867ce456ac3ea88f25fe87a9256014f317c252c407d4d833d238a7f Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.836692 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6vjwl_8dac888b-051f-405a-8c23-60c205d2aecc/ovnkube-controller/3.log" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.839201 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6vjwl_8dac888b-051f-405a-8c23-60c205d2aecc/ovn-acl-logging/0.log" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.839837 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6vjwl_8dac888b-051f-405a-8c23-60c205d2aecc/ovn-controller/0.log" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840179 4684 generic.go:334] "Generic (PLEG): container finished" podID="8dac888b-051f-405a-8c23-60c205d2aecc" containerID="209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae" exitCode=0 Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840205 4684 generic.go:334] "Generic (PLEG): container finished" podID="8dac888b-051f-405a-8c23-60c205d2aecc" 
containerID="24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294" exitCode=0 Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840214 4684 generic.go:334] "Generic (PLEG): container finished" podID="8dac888b-051f-405a-8c23-60c205d2aecc" containerID="6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e" exitCode=0 Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840213 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" event={"ID":"8dac888b-051f-405a-8c23-60c205d2aecc","Type":"ContainerDied","Data":"209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840267 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" event={"ID":"8dac888b-051f-405a-8c23-60c205d2aecc","Type":"ContainerDied","Data":"24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840282 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" event={"ID":"8dac888b-051f-405a-8c23-60c205d2aecc","Type":"ContainerDied","Data":"6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840294 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" event={"ID":"8dac888b-051f-405a-8c23-60c205d2aecc","Type":"ContainerDied","Data":"0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840298 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840224 4684 generic.go:334] "Generic (PLEG): container finished" podID="8dac888b-051f-405a-8c23-60c205d2aecc" containerID="0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29" exitCode=0 Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840393 4684 generic.go:334] "Generic (PLEG): container finished" podID="8dac888b-051f-405a-8c23-60c205d2aecc" containerID="0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b" exitCode=0 Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840404 4684 generic.go:334] "Generic (PLEG): container finished" podID="8dac888b-051f-405a-8c23-60c205d2aecc" containerID="ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124" exitCode=0 Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840405 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" event={"ID":"8dac888b-051f-405a-8c23-60c205d2aecc","Type":"ContainerDied","Data":"0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840412 4684 generic.go:334] "Generic (PLEG): container finished" podID="8dac888b-051f-405a-8c23-60c205d2aecc" containerID="0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c" exitCode=143 Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840424 4684 generic.go:334] "Generic (PLEG): container finished" podID="8dac888b-051f-405a-8c23-60c205d2aecc" containerID="472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544" exitCode=143 Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840425 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" event={"ID":"8dac888b-051f-405a-8c23-60c205d2aecc","Type":"ContainerDied","Data":"ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840439 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840463 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840476 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840483 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840512 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840520 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840527 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840533 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840539 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840549 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" event={"ID":"8dac888b-051f-405a-8c23-60c205d2aecc","Type":"ContainerDied","Data":"0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840560 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840594 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840601 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 
10:17:07.840608 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840615 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840621 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840628 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840637 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840667 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840675 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840684 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" event={"ID":"8dac888b-051f-405a-8c23-60c205d2aecc","Type":"ContainerDied","Data":"472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840695 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840702 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840708 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840714 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840720 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840752 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 
10:17:07.840759 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840765 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840771 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840778 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840787 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6vjwl" event={"ID":"8dac888b-051f-405a-8c23-60c205d2aecc","Type":"ContainerDied","Data":"456e71751215fd65a71e44af5d6eaa47482e203d27ef9a98426569053bf5c932"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840315 4684 scope.go:117] "RemoveContainer" containerID="209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840803 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840909 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840917 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840923 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840930 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840936 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840942 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840948 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840980 4684 pod_container_deletor.go:114] "Failed to issue 
the request to remove container" containerID={"Type":"cri-o","ID":"472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.840987 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.841784 4684 generic.go:334] "Generic (PLEG): container finished" podID="716ec6e6-cde7-4231-a1e5-753834c32812" containerID="a0dd7557038490c6d618b3f415a68c7b33dc169304fb1b37afaf6a3987d16848" exitCode=0 Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.841823 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" event={"ID":"716ec6e6-cde7-4231-a1e5-753834c32812","Type":"ContainerDied","Data":"a0dd7557038490c6d618b3f415a68c7b33dc169304fb1b37afaf6a3987d16848"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.841842 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" event={"ID":"716ec6e6-cde7-4231-a1e5-753834c32812","Type":"ContainerStarted","Data":"eaca20b37867ce456ac3ea88f25fe87a9256014f317c252c407d4d833d238a7f"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.848833 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6jwd4_1e7ac4c6-b960-418c-b057-e55d95a213cd/kube-multus/2.log" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.849630 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6jwd4_1e7ac4c6-b960-418c-b057-e55d95a213cd/kube-multus/1.log" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.849676 4684 generic.go:334] "Generic (PLEG): container finished" podID="1e7ac4c6-b960-418c-b057-e55d95a213cd" containerID="3da3e62c432812c504b13fff6b927c1f9c0e2d04accb6e450bd21504262c7eaf" exitCode=2 Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.849711 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6jwd4" event={"ID":"1e7ac4c6-b960-418c-b057-e55d95a213cd","Type":"ContainerDied","Data":"3da3e62c432812c504b13fff6b927c1f9c0e2d04accb6e450bd21504262c7eaf"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.849738 4684 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ecdbfb9dca303cd8a667876237175bf17e36baff120b184253fc01c630c37b8d"} Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.850231 4684 scope.go:117] "RemoveContainer" containerID="3da3e62c432812c504b13fff6b927c1f9c0e2d04accb6e450bd21504262c7eaf" Jan 21 10:17:07 crc kubenswrapper[4684]: E0121 10:17:07.850457 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-6jwd4_openshift-multus(1e7ac4c6-b960-418c-b057-e55d95a213cd)\"" pod="openshift-multus/multus-6jwd4" podUID="1e7ac4c6-b960-418c-b057-e55d95a213cd" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.860156 4684 scope.go:117] "RemoveContainer" containerID="4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.881632 4684 scope.go:117] "RemoveContainer" containerID="24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.908025 4684 scope.go:117] "RemoveContainer" 
containerID="6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.916821 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6vjwl"] Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.920001 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6vjwl"] Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.958536 4684 scope.go:117] "RemoveContainer" containerID="0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.973795 4684 scope.go:117] "RemoveContainer" containerID="0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b" Jan 21 10:17:07 crc kubenswrapper[4684]: I0121 10:17:07.991708 4684 scope.go:117] "RemoveContainer" containerID="ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.004466 4684 scope.go:117] "RemoveContainer" containerID="0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.017240 4684 scope.go:117] "RemoveContainer" containerID="472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.030355 4684 scope.go:117] "RemoveContainer" containerID="225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.047527 4684 scope.go:117] "RemoveContainer" containerID="209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae" Jan 21 10:17:08 crc kubenswrapper[4684]: E0121 10:17:08.047902 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae\": container with ID starting with 209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae not found: ID does not exist" containerID="209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.047936 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae"} err="failed to get container status \"209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae\": rpc error: code = NotFound desc = could not find container \"209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae\": container with ID starting with 209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.047963 4684 scope.go:117] "RemoveContainer" containerID="4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6" Jan 21 10:17:08 crc kubenswrapper[4684]: E0121 10:17:08.048307 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6\": container with ID starting with 4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6 not found: ID does not exist" containerID="4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.048347 4684 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6"} err="failed to get container status \"4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6\": rpc error: code = NotFound desc = could not find container \"4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6\": container with ID starting with 4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6 not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.048390 4684 scope.go:117] "RemoveContainer" containerID="24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294" Jan 21 10:17:08 crc kubenswrapper[4684]: E0121 10:17:08.048645 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\": container with ID starting with 24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294 not found: ID does not exist" containerID="24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.048674 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294"} err="failed to get container status \"24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\": rpc error: code = NotFound desc = could not find container \"24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\": container with ID starting with 24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294 not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.048690 4684 scope.go:117] "RemoveContainer" containerID="6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e" Jan 21 10:17:08 crc kubenswrapper[4684]: E0121 10:17:08.049019 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\": container with ID starting with 6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e not found: ID does not exist" containerID="6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.049075 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e"} err="failed to get container status \"6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\": rpc error: code = NotFound desc = could not find container \"6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\": container with ID starting with 6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.049113 4684 scope.go:117] "RemoveContainer" containerID="0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29" Jan 21 10:17:08 crc kubenswrapper[4684]: E0121 10:17:08.049479 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\": container with ID starting with 0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29 not found: ID does not exist" 
containerID="0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.049509 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29"} err="failed to get container status \"0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\": rpc error: code = NotFound desc = could not find container \"0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\": container with ID starting with 0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29 not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.049527 4684 scope.go:117] "RemoveContainer" containerID="0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b" Jan 21 10:17:08 crc kubenswrapper[4684]: E0121 10:17:08.049842 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\": container with ID starting with 0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b not found: ID does not exist" containerID="0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.049876 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b"} err="failed to get container status \"0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\": rpc error: code = NotFound desc = could not find container \"0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\": container with ID starting with 0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.049901 4684 scope.go:117] "RemoveContainer" containerID="ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124" Jan 21 10:17:08 crc kubenswrapper[4684]: E0121 10:17:08.050306 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\": container with ID starting with ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124 not found: ID does not exist" containerID="ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.050336 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124"} err="failed to get container status \"ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\": rpc error: code = NotFound desc = could not find container \"ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\": container with ID starting with ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124 not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.050355 4684 scope.go:117] "RemoveContainer" containerID="0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c" Jan 21 10:17:08 crc kubenswrapper[4684]: E0121 10:17:08.050836 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\": container with ID starting with 0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c not found: ID does not exist" containerID="0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.050863 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c"} err="failed to get container status \"0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\": rpc error: code = NotFound desc = could not find container \"0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\": container with ID starting with 0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.050879 4684 scope.go:117] "RemoveContainer" containerID="472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544" Jan 21 10:17:08 crc kubenswrapper[4684]: E0121 10:17:08.051201 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\": container with ID starting with 472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544 not found: ID does not exist" containerID="472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.051234 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544"} err="failed to get container status \"472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\": rpc error: code = NotFound desc = could not find container \"472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\": container with ID starting with 472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544 not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.051253 4684 scope.go:117] "RemoveContainer" containerID="225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed" Jan 21 10:17:08 crc kubenswrapper[4684]: E0121 10:17:08.051544 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\": container with ID starting with 225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed not found: ID does not exist" containerID="225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.051569 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed"} err="failed to get container status \"225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\": rpc error: code = NotFound desc = could not find container \"225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\": container with ID starting with 225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.051587 4684 scope.go:117] "RemoveContainer" containerID="209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae" Jan 21 10:17:08 crc 
kubenswrapper[4684]: I0121 10:17:08.051850 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae"} err="failed to get container status \"209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae\": rpc error: code = NotFound desc = could not find container \"209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae\": container with ID starting with 209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.051870 4684 scope.go:117] "RemoveContainer" containerID="4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.052134 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6"} err="failed to get container status \"4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6\": rpc error: code = NotFound desc = could not find container \"4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6\": container with ID starting with 4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6 not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.052165 4684 scope.go:117] "RemoveContainer" containerID="24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.052454 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294"} err="failed to get container status \"24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\": rpc error: code = NotFound desc = could not find container \"24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\": container with ID starting with 24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294 not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.052484 4684 scope.go:117] "RemoveContainer" containerID="6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.052772 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e"} err="failed to get container status \"6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\": rpc error: code = NotFound desc = could not find container \"6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\": container with ID starting with 6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.052798 4684 scope.go:117] "RemoveContainer" containerID="0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.053065 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29"} err="failed to get container status \"0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\": rpc error: code = NotFound desc = could not find container \"0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\": container with ID 
starting with 0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29 not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.053084 4684 scope.go:117] "RemoveContainer" containerID="0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.053332 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b"} err="failed to get container status \"0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\": rpc error: code = NotFound desc = could not find container \"0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\": container with ID starting with 0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.053349 4684 scope.go:117] "RemoveContainer" containerID="ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.053575 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124"} err="failed to get container status \"ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\": rpc error: code = NotFound desc = could not find container \"ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\": container with ID starting with ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124 not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.053605 4684 scope.go:117] "RemoveContainer" containerID="0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.053951 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c"} err="failed to get container status \"0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\": rpc error: code = NotFound desc = could not find container \"0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\": container with ID starting with 0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.053971 4684 scope.go:117] "RemoveContainer" containerID="472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.054360 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544"} err="failed to get container status \"472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\": rpc error: code = NotFound desc = could not find container \"472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\": container with ID starting with 472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544 not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.054392 4684 scope.go:117] "RemoveContainer" containerID="225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.054666 4684 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed"} err="failed to get container status \"225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\": rpc error: code = NotFound desc = could not find container \"225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\": container with ID starting with 225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.054685 4684 scope.go:117] "RemoveContainer" containerID="209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.055284 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae"} err="failed to get container status \"209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae\": rpc error: code = NotFound desc = could not find container \"209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae\": container with ID starting with 209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.055317 4684 scope.go:117] "RemoveContainer" containerID="4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.055612 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6"} err="failed to get container status \"4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6\": rpc error: code = NotFound desc = could not find container \"4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6\": container with ID starting with 4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6 not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.055635 4684 scope.go:117] "RemoveContainer" containerID="24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.056112 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294"} err="failed to get container status \"24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\": rpc error: code = NotFound desc = could not find container \"24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\": container with ID starting with 24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294 not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.056141 4684 scope.go:117] "RemoveContainer" containerID="6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.056525 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e"} err="failed to get container status \"6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\": rpc error: code = NotFound desc = could not find container \"6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\": container with ID starting with 6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e not found: ID does not exist" Jan 
21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.056550 4684 scope.go:117] "RemoveContainer" containerID="0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.056853 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29"} err="failed to get container status \"0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\": rpc error: code = NotFound desc = could not find container \"0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\": container with ID starting with 0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29 not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.056873 4684 scope.go:117] "RemoveContainer" containerID="0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.057128 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b"} err="failed to get container status \"0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\": rpc error: code = NotFound desc = could not find container \"0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\": container with ID starting with 0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.057143 4684 scope.go:117] "RemoveContainer" containerID="ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.057443 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124"} err="failed to get container status \"ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\": rpc error: code = NotFound desc = could not find container \"ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\": container with ID starting with ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124 not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.057469 4684 scope.go:117] "RemoveContainer" containerID="0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.057819 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c"} err="failed to get container status \"0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\": rpc error: code = NotFound desc = could not find container \"0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\": container with ID starting with 0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.057848 4684 scope.go:117] "RemoveContainer" containerID="472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.058139 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544"} err="failed to get container status 
\"472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\": rpc error: code = NotFound desc = could not find container \"472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\": container with ID starting with 472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544 not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.058158 4684 scope.go:117] "RemoveContainer" containerID="225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.058447 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed"} err="failed to get container status \"225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\": rpc error: code = NotFound desc = could not find container \"225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\": container with ID starting with 225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.058474 4684 scope.go:117] "RemoveContainer" containerID="209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.058679 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae"} err="failed to get container status \"209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae\": rpc error: code = NotFound desc = could not find container \"209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae\": container with ID starting with 209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.058705 4684 scope.go:117] "RemoveContainer" containerID="4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.058960 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6"} err="failed to get container status \"4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6\": rpc error: code = NotFound desc = could not find container \"4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6\": container with ID starting with 4e9fb827b6c473a55160608247e657c1b08b63678d4386e004f6d8b2dccfe6b6 not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.058985 4684 scope.go:117] "RemoveContainer" containerID="24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.059214 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294"} err="failed to get container status \"24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\": rpc error: code = NotFound desc = could not find container \"24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294\": container with ID starting with 24925dc513a6e99e7bb5f9cca61fabd679bb93ba852060fbf1309e803e8a2294 not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.059237 4684 scope.go:117] "RemoveContainer" 
containerID="6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.059524 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e"} err="failed to get container status \"6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\": rpc error: code = NotFound desc = could not find container \"6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e\": container with ID starting with 6b1e1e8ee3f57153d099acc6cdadd8de9a8cc85120a898c1d3a9a4909125cd6e not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.059547 4684 scope.go:117] "RemoveContainer" containerID="0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.059824 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29"} err="failed to get container status \"0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\": rpc error: code = NotFound desc = could not find container \"0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29\": container with ID starting with 0127045e4cd7b97c184c80ffafcd7fff7b338c558d0ace76055f283c6da53a29 not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.059845 4684 scope.go:117] "RemoveContainer" containerID="0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.060044 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b"} err="failed to get container status \"0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\": rpc error: code = NotFound desc = could not find container \"0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b\": container with ID starting with 0dc4d72878ff7ced55e5ce4455982bf4aa348ccc304207e10e6c1176af0bf36b not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.060064 4684 scope.go:117] "RemoveContainer" containerID="ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.060379 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124"} err="failed to get container status \"ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\": rpc error: code = NotFound desc = could not find container \"ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124\": container with ID starting with ca4bf59b55562d2b40253e7b4c56e07771fb5510b21faf5e8305f92e3908d124 not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.060402 4684 scope.go:117] "RemoveContainer" containerID="0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.060652 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c"} err="failed to get container status \"0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\": rpc error: code = NotFound desc = could not find 
container \"0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c\": container with ID starting with 0f768bf8ad0fb119172788f34acd50d26bcfd36965ffa6456297c471b2388e7c not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.060676 4684 scope.go:117] "RemoveContainer" containerID="472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.060924 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544"} err="failed to get container status \"472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\": rpc error: code = NotFound desc = could not find container \"472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544\": container with ID starting with 472d896dc5f177e67fe632a5cb9a87e112e3f751c4e4510078517a7944881544 not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.060974 4684 scope.go:117] "RemoveContainer" containerID="225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.061312 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed"} err="failed to get container status \"225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\": rpc error: code = NotFound desc = could not find container \"225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed\": container with ID starting with 225138e5afe8312c34b7bbabd00ff2a3714d5b1fecb33cf02cbeeb8ed79c86ed not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.061331 4684 scope.go:117] "RemoveContainer" containerID="209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.061623 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae"} err="failed to get container status \"209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae\": rpc error: code = NotFound desc = could not find container \"209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae\": container with ID starting with 209670b40641a47a08d0f42d615a7faaacb25ff8fb66ae97abf239a2b18a21ae not found: ID does not exist" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.523169 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dac888b-051f-405a-8c23-60c205d2aecc" path="/var/lib/kubelet/pods/8dac888b-051f-405a-8c23-60c205d2aecc/volumes" Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.868865 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" event={"ID":"716ec6e6-cde7-4231-a1e5-753834c32812","Type":"ContainerStarted","Data":"fef9c062caad0f91520aa9d7257605ba5da8dad4dc6ba23c4a6752073b3ef4a2"} Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.868922 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" event={"ID":"716ec6e6-cde7-4231-a1e5-753834c32812","Type":"ContainerStarted","Data":"d2bc9df6ba9194b8a22f6a56c99b5f50cf8c9e5167bb2363a647f50e2d30a1bc"} Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.868935 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" event={"ID":"716ec6e6-cde7-4231-a1e5-753834c32812","Type":"ContainerStarted","Data":"dc803c3e09f7a11f683c55a6b19b707c80f80c51fc9c99ae5cec75df72e058c9"} Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.868945 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" event={"ID":"716ec6e6-cde7-4231-a1e5-753834c32812","Type":"ContainerStarted","Data":"a8cd63012685c2e68c3e3dc970dc1be946bfd81e31f4396f6530ae1e25f591ca"} Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.868961 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" event={"ID":"716ec6e6-cde7-4231-a1e5-753834c32812","Type":"ContainerStarted","Data":"9cf6504b805225af174830511d1e7b0a136ac1baa6c71a315e9015179057cf90"} Jan 21 10:17:08 crc kubenswrapper[4684]: I0121 10:17:08.868970 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" event={"ID":"716ec6e6-cde7-4231-a1e5-753834c32812","Type":"ContainerStarted","Data":"d6a2cea053516c1452c6972b9322544ca962832154c8720550a0b86100f1c8c7"} Jan 21 10:17:10 crc kubenswrapper[4684]: I0121 10:17:10.885797 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" event={"ID":"716ec6e6-cde7-4231-a1e5-753834c32812","Type":"ContainerStarted","Data":"7c9902b6251a73c78efe43912a02adbc52adcb58a3128fa7ab77e72cd227cb86"} Jan 21 10:17:13 crc kubenswrapper[4684]: I0121 10:17:13.916305 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" event={"ID":"716ec6e6-cde7-4231-a1e5-753834c32812","Type":"ContainerStarted","Data":"69bd8f191214330035ffc832e200efc375ac43edc7725b741e263fed04946867"} Jan 21 10:17:13 crc kubenswrapper[4684]: I0121 10:17:13.917781 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:13 crc kubenswrapper[4684]: I0121 10:17:13.917907 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:13 crc kubenswrapper[4684]: I0121 10:17:13.918186 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:13 crc kubenswrapper[4684]: I0121 10:17:13.945044 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:13 crc kubenswrapper[4684]: I0121 10:17:13.946508 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:17:13 crc kubenswrapper[4684]: I0121 10:17:13.949445 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" podStartSLOduration=6.949427771 podStartE2EDuration="6.949427771s" podCreationTimestamp="2026-01-21 10:17:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:17:13.946737739 +0000 UTC m=+671.704820706" watchObservedRunningTime="2026-01-21 10:17:13.949427771 +0000 UTC m=+671.707510738" Jan 21 10:17:22 crc kubenswrapper[4684]: I0121 10:17:22.516963 4684 scope.go:117] "RemoveContainer" containerID="3da3e62c432812c504b13fff6b927c1f9c0e2d04accb6e450bd21504262c7eaf" Jan 21 10:17:22 crc kubenswrapper[4684]: E0121 10:17:22.517731 4684 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-6jwd4_openshift-multus(1e7ac4c6-b960-418c-b057-e55d95a213cd)\"" pod="openshift-multus/multus-6jwd4" podUID="1e7ac4c6-b960-418c-b057-e55d95a213cd" Jan 21 10:17:36 crc kubenswrapper[4684]: I0121 10:17:36.515783 4684 scope.go:117] "RemoveContainer" containerID="3da3e62c432812c504b13fff6b927c1f9c0e2d04accb6e450bd21504262c7eaf" Jan 21 10:17:37 crc kubenswrapper[4684]: I0121 10:17:37.046747 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6jwd4_1e7ac4c6-b960-418c-b057-e55d95a213cd/kube-multus/2.log" Jan 21 10:17:37 crc kubenswrapper[4684]: I0121 10:17:37.047763 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6jwd4_1e7ac4c6-b960-418c-b057-e55d95a213cd/kube-multus/1.log" Jan 21 10:17:37 crc kubenswrapper[4684]: I0121 10:17:37.047863 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6jwd4" event={"ID":"1e7ac4c6-b960-418c-b057-e55d95a213cd","Type":"ContainerStarted","Data":"b8cc2fdea7655b66fcca2de9fb26da73f0c2b3626bd63b2eab762e4fdc307276"} Jan 21 10:17:37 crc kubenswrapper[4684]: I0121 10:17:37.507328 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jxnn6" Jan 21 10:18:02 crc kubenswrapper[4684]: I0121 10:18:02.777350 4684 scope.go:117] "RemoveContainer" containerID="ecdbfb9dca303cd8a667876237175bf17e36baff120b184253fc01c630c37b8d" Jan 21 10:18:03 crc kubenswrapper[4684]: I0121 10:18:03.225625 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6jwd4_1e7ac4c6-b960-418c-b057-e55d95a213cd/kube-multus/2.log" Jan 21 10:18:07 crc kubenswrapper[4684]: I0121 10:18:07.303181 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:18:07 crc kubenswrapper[4684]: I0121 10:18:07.303571 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:18:13 crc kubenswrapper[4684]: I0121 10:18:13.799740 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q79ws"] Jan 21 10:18:13 crc kubenswrapper[4684]: I0121 10:18:13.802512 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q79ws" podUID="f9f898be-f7bb-48e5-b345-cc483a249a54" containerName="registry-server" containerID="cri-o://32b6ddd52713325c462f954b76472e15b753b063622b41a98ce5c48031aaa9eb" gracePeriod=30 Jan 21 10:18:14 crc kubenswrapper[4684]: I0121 10:18:14.180315 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q79ws" Jan 21 10:18:14 crc kubenswrapper[4684]: I0121 10:18:14.297712 4684 generic.go:334] "Generic (PLEG): container finished" podID="f9f898be-f7bb-48e5-b345-cc483a249a54" containerID="32b6ddd52713325c462f954b76472e15b753b063622b41a98ce5c48031aaa9eb" exitCode=0 Jan 21 10:18:14 crc kubenswrapper[4684]: I0121 10:18:14.297774 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q79ws" event={"ID":"f9f898be-f7bb-48e5-b345-cc483a249a54","Type":"ContainerDied","Data":"32b6ddd52713325c462f954b76472e15b753b063622b41a98ce5c48031aaa9eb"} Jan 21 10:18:14 crc kubenswrapper[4684]: I0121 10:18:14.297822 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q79ws" event={"ID":"f9f898be-f7bb-48e5-b345-cc483a249a54","Type":"ContainerDied","Data":"4c2093aa3dabb1d4bf511a37a59f6a02970d49c934e706982958e3700e4dbfae"} Jan 21 10:18:14 crc kubenswrapper[4684]: I0121 10:18:14.297833 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q79ws" Jan 21 10:18:14 crc kubenswrapper[4684]: I0121 10:18:14.297846 4684 scope.go:117] "RemoveContainer" containerID="32b6ddd52713325c462f954b76472e15b753b063622b41a98ce5c48031aaa9eb" Jan 21 10:18:14 crc kubenswrapper[4684]: I0121 10:18:14.320323 4684 scope.go:117] "RemoveContainer" containerID="084c16a6936109ded891e473590bc57e30de0ae7e687e5da4093be4b46a5ee61" Jan 21 10:18:14 crc kubenswrapper[4684]: I0121 10:18:14.345328 4684 scope.go:117] "RemoveContainer" containerID="c5faf20d6c7d1d77e0b1bb3ee6a2c7b18f80766e19aaf7aeca8b19f6171335e3" Jan 21 10:18:14 crc kubenswrapper[4684]: I0121 10:18:14.345656 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjg9h\" (UniqueName: \"kubernetes.io/projected/f9f898be-f7bb-48e5-b345-cc483a249a54-kube-api-access-wjg9h\") pod \"f9f898be-f7bb-48e5-b345-cc483a249a54\" (UID: \"f9f898be-f7bb-48e5-b345-cc483a249a54\") " Jan 21 10:18:14 crc kubenswrapper[4684]: I0121 10:18:14.345695 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9f898be-f7bb-48e5-b345-cc483a249a54-utilities\") pod \"f9f898be-f7bb-48e5-b345-cc483a249a54\" (UID: \"f9f898be-f7bb-48e5-b345-cc483a249a54\") " Jan 21 10:18:14 crc kubenswrapper[4684]: I0121 10:18:14.345717 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9f898be-f7bb-48e5-b345-cc483a249a54-catalog-content\") pod \"f9f898be-f7bb-48e5-b345-cc483a249a54\" (UID: \"f9f898be-f7bb-48e5-b345-cc483a249a54\") " Jan 21 10:18:14 crc kubenswrapper[4684]: I0121 10:18:14.347063 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9f898be-f7bb-48e5-b345-cc483a249a54-utilities" (OuterVolumeSpecName: "utilities") pod "f9f898be-f7bb-48e5-b345-cc483a249a54" (UID: "f9f898be-f7bb-48e5-b345-cc483a249a54"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:18:14 crc kubenswrapper[4684]: I0121 10:18:14.353083 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9f898be-f7bb-48e5-b345-cc483a249a54-kube-api-access-wjg9h" (OuterVolumeSpecName: "kube-api-access-wjg9h") pod "f9f898be-f7bb-48e5-b345-cc483a249a54" (UID: "f9f898be-f7bb-48e5-b345-cc483a249a54"). InnerVolumeSpecName "kube-api-access-wjg9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:18:14 crc kubenswrapper[4684]: I0121 10:18:14.393890 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9f898be-f7bb-48e5-b345-cc483a249a54-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9f898be-f7bb-48e5-b345-cc483a249a54" (UID: "f9f898be-f7bb-48e5-b345-cc483a249a54"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:18:14 crc kubenswrapper[4684]: I0121 10:18:14.394771 4684 scope.go:117] "RemoveContainer" containerID="32b6ddd52713325c462f954b76472e15b753b063622b41a98ce5c48031aaa9eb" Jan 21 10:18:14 crc kubenswrapper[4684]: E0121 10:18:14.395340 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32b6ddd52713325c462f954b76472e15b753b063622b41a98ce5c48031aaa9eb\": container with ID starting with 32b6ddd52713325c462f954b76472e15b753b063622b41a98ce5c48031aaa9eb not found: ID does not exist" containerID="32b6ddd52713325c462f954b76472e15b753b063622b41a98ce5c48031aaa9eb" Jan 21 10:18:14 crc kubenswrapper[4684]: I0121 10:18:14.395437 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32b6ddd52713325c462f954b76472e15b753b063622b41a98ce5c48031aaa9eb"} err="failed to get container status \"32b6ddd52713325c462f954b76472e15b753b063622b41a98ce5c48031aaa9eb\": rpc error: code = NotFound desc = could not find container \"32b6ddd52713325c462f954b76472e15b753b063622b41a98ce5c48031aaa9eb\": container with ID starting with 32b6ddd52713325c462f954b76472e15b753b063622b41a98ce5c48031aaa9eb not found: ID does not exist" Jan 21 10:18:14 crc kubenswrapper[4684]: I0121 10:18:14.395474 4684 scope.go:117] "RemoveContainer" containerID="084c16a6936109ded891e473590bc57e30de0ae7e687e5da4093be4b46a5ee61" Jan 21 10:18:14 crc kubenswrapper[4684]: E0121 10:18:14.395938 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"084c16a6936109ded891e473590bc57e30de0ae7e687e5da4093be4b46a5ee61\": container with ID starting with 084c16a6936109ded891e473590bc57e30de0ae7e687e5da4093be4b46a5ee61 not found: ID does not exist" containerID="084c16a6936109ded891e473590bc57e30de0ae7e687e5da4093be4b46a5ee61" Jan 21 10:18:14 crc kubenswrapper[4684]: I0121 10:18:14.395976 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"084c16a6936109ded891e473590bc57e30de0ae7e687e5da4093be4b46a5ee61"} err="failed to get container status \"084c16a6936109ded891e473590bc57e30de0ae7e687e5da4093be4b46a5ee61\": rpc error: code = NotFound desc = could not find container \"084c16a6936109ded891e473590bc57e30de0ae7e687e5da4093be4b46a5ee61\": container with ID starting with 084c16a6936109ded891e473590bc57e30de0ae7e687e5da4093be4b46a5ee61 not found: ID does not exist" Jan 21 10:18:14 crc kubenswrapper[4684]: I0121 10:18:14.396003 4684 scope.go:117] "RemoveContainer" 
containerID="c5faf20d6c7d1d77e0b1bb3ee6a2c7b18f80766e19aaf7aeca8b19f6171335e3" Jan 21 10:18:14 crc kubenswrapper[4684]: E0121 10:18:14.396522 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5faf20d6c7d1d77e0b1bb3ee6a2c7b18f80766e19aaf7aeca8b19f6171335e3\": container with ID starting with c5faf20d6c7d1d77e0b1bb3ee6a2c7b18f80766e19aaf7aeca8b19f6171335e3 not found: ID does not exist" containerID="c5faf20d6c7d1d77e0b1bb3ee6a2c7b18f80766e19aaf7aeca8b19f6171335e3" Jan 21 10:18:14 crc kubenswrapper[4684]: I0121 10:18:14.396591 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5faf20d6c7d1d77e0b1bb3ee6a2c7b18f80766e19aaf7aeca8b19f6171335e3"} err="failed to get container status \"c5faf20d6c7d1d77e0b1bb3ee6a2c7b18f80766e19aaf7aeca8b19f6171335e3\": rpc error: code = NotFound desc = could not find container \"c5faf20d6c7d1d77e0b1bb3ee6a2c7b18f80766e19aaf7aeca8b19f6171335e3\": container with ID starting with c5faf20d6c7d1d77e0b1bb3ee6a2c7b18f80766e19aaf7aeca8b19f6171335e3 not found: ID does not exist" Jan 21 10:18:14 crc kubenswrapper[4684]: I0121 10:18:14.447850 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjg9h\" (UniqueName: \"kubernetes.io/projected/f9f898be-f7bb-48e5-b345-cc483a249a54-kube-api-access-wjg9h\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:14 crc kubenswrapper[4684]: I0121 10:18:14.447931 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9f898be-f7bb-48e5-b345-cc483a249a54-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:14 crc kubenswrapper[4684]: I0121 10:18:14.447946 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9f898be-f7bb-48e5-b345-cc483a249a54-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:14 crc kubenswrapper[4684]: I0121 10:18:14.621124 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q79ws"] Jan 21 10:18:14 crc kubenswrapper[4684]: I0121 10:18:14.626057 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q79ws"] Jan 21 10:18:16 crc kubenswrapper[4684]: I0121 10:18:16.520137 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9f898be-f7bb-48e5-b345-cc483a249a54" path="/var/lib/kubelet/pods/f9f898be-f7bb-48e5-b345-cc483a249a54/volumes" Jan 21 10:18:17 crc kubenswrapper[4684]: I0121 10:18:17.556607 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq"] Jan 21 10:18:17 crc kubenswrapper[4684]: E0121 10:18:17.557670 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9f898be-f7bb-48e5-b345-cc483a249a54" containerName="extract-content" Jan 21 10:18:17 crc kubenswrapper[4684]: I0121 10:18:17.557708 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9f898be-f7bb-48e5-b345-cc483a249a54" containerName="extract-content" Jan 21 10:18:17 crc kubenswrapper[4684]: E0121 10:18:17.557719 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9f898be-f7bb-48e5-b345-cc483a249a54" containerName="extract-utilities" Jan 21 10:18:17 crc kubenswrapper[4684]: I0121 10:18:17.557726 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9f898be-f7bb-48e5-b345-cc483a249a54" containerName="extract-utilities" Jan 21 
10:18:17 crc kubenswrapper[4684]: E0121 10:18:17.557746 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9f898be-f7bb-48e5-b345-cc483a249a54" containerName="registry-server" Jan 21 10:18:17 crc kubenswrapper[4684]: I0121 10:18:17.557752 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9f898be-f7bb-48e5-b345-cc483a249a54" containerName="registry-server" Jan 21 10:18:17 crc kubenswrapper[4684]: I0121 10:18:17.557838 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9f898be-f7bb-48e5-b345-cc483a249a54" containerName="registry-server" Jan 21 10:18:17 crc kubenswrapper[4684]: I0121 10:18:17.558573 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq" Jan 21 10:18:17 crc kubenswrapper[4684]: I0121 10:18:17.560467 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 10:18:17 crc kubenswrapper[4684]: I0121 10:18:17.566785 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq"] Jan 21 10:18:17 crc kubenswrapper[4684]: I0121 10:18:17.689329 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs2tr\" (UniqueName: \"kubernetes.io/projected/061544f4-badf-4c7b-a325-e582fa4e2451-kube-api-access-hs2tr\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq\" (UID: \"061544f4-badf-4c7b-a325-e582fa4e2451\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq" Jan 21 10:18:17 crc kubenswrapper[4684]: I0121 10:18:17.689444 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/061544f4-badf-4c7b-a325-e582fa4e2451-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq\" (UID: \"061544f4-badf-4c7b-a325-e582fa4e2451\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq" Jan 21 10:18:17 crc kubenswrapper[4684]: I0121 10:18:17.689545 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/061544f4-badf-4c7b-a325-e582fa4e2451-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq\" (UID: \"061544f4-badf-4c7b-a325-e582fa4e2451\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq" Jan 21 10:18:17 crc kubenswrapper[4684]: I0121 10:18:17.791053 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/061544f4-badf-4c7b-a325-e582fa4e2451-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq\" (UID: \"061544f4-badf-4c7b-a325-e582fa4e2451\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq" Jan 21 10:18:17 crc kubenswrapper[4684]: I0121 10:18:17.791137 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs2tr\" (UniqueName: \"kubernetes.io/projected/061544f4-badf-4c7b-a325-e582fa4e2451-kube-api-access-hs2tr\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq\" (UID: \"061544f4-badf-4c7b-a325-e582fa4e2451\") " 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq" Jan 21 10:18:17 crc kubenswrapper[4684]: I0121 10:18:17.791159 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/061544f4-badf-4c7b-a325-e582fa4e2451-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq\" (UID: \"061544f4-badf-4c7b-a325-e582fa4e2451\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq" Jan 21 10:18:17 crc kubenswrapper[4684]: I0121 10:18:17.791663 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/061544f4-badf-4c7b-a325-e582fa4e2451-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq\" (UID: \"061544f4-badf-4c7b-a325-e582fa4e2451\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq" Jan 21 10:18:17 crc kubenswrapper[4684]: I0121 10:18:17.791663 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/061544f4-badf-4c7b-a325-e582fa4e2451-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq\" (UID: \"061544f4-badf-4c7b-a325-e582fa4e2451\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq" Jan 21 10:18:17 crc kubenswrapper[4684]: I0121 10:18:17.813455 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs2tr\" (UniqueName: \"kubernetes.io/projected/061544f4-badf-4c7b-a325-e582fa4e2451-kube-api-access-hs2tr\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq\" (UID: \"061544f4-badf-4c7b-a325-e582fa4e2451\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq" Jan 21 10:18:17 crc kubenswrapper[4684]: I0121 10:18:17.883061 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq" Jan 21 10:18:18 crc kubenswrapper[4684]: I0121 10:18:18.099307 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq"] Jan 21 10:18:18 crc kubenswrapper[4684]: I0121 10:18:18.321142 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq" event={"ID":"061544f4-badf-4c7b-a325-e582fa4e2451","Type":"ContainerStarted","Data":"06580c1d4f6a655ff6e6b28512fb254cff91023f2cd226fac1be6f3d610c300f"} Jan 21 10:18:18 crc kubenswrapper[4684]: I0121 10:18:18.321207 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq" event={"ID":"061544f4-badf-4c7b-a325-e582fa4e2451","Type":"ContainerStarted","Data":"4fd14e855a20c4f78e75af69a83b3c3457879b603c4e6677461fd4bdfcc70cea"} Jan 21 10:18:19 crc kubenswrapper[4684]: I0121 10:18:19.331046 4684 generic.go:334] "Generic (PLEG): container finished" podID="061544f4-badf-4c7b-a325-e582fa4e2451" containerID="06580c1d4f6a655ff6e6b28512fb254cff91023f2cd226fac1be6f3d610c300f" exitCode=0 Jan 21 10:18:19 crc kubenswrapper[4684]: I0121 10:18:19.331106 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq" event={"ID":"061544f4-badf-4c7b-a325-e582fa4e2451","Type":"ContainerDied","Data":"06580c1d4f6a655ff6e6b28512fb254cff91023f2cd226fac1be6f3d610c300f"} Jan 21 10:18:19 crc kubenswrapper[4684]: I0121 10:18:19.334088 4684 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 10:18:22 crc kubenswrapper[4684]: I0121 10:18:22.356382 4684 generic.go:334] "Generic (PLEG): container finished" podID="061544f4-badf-4c7b-a325-e582fa4e2451" containerID="ace041358145adc8cd2755928592f2333c0d9ec1e090fa8d4f47541b143cecaf" exitCode=0 Jan 21 10:18:22 crc kubenswrapper[4684]: I0121 10:18:22.357806 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq" event={"ID":"061544f4-badf-4c7b-a325-e582fa4e2451","Type":"ContainerDied","Data":"ace041358145adc8cd2755928592f2333c0d9ec1e090fa8d4f47541b143cecaf"} Jan 21 10:18:23 crc kubenswrapper[4684]: I0121 10:18:23.367027 4684 generic.go:334] "Generic (PLEG): container finished" podID="061544f4-badf-4c7b-a325-e582fa4e2451" containerID="e77efe6a2daf5248b10cee3e694d45d6b4412803a7515acad3cbb8634fad467a" exitCode=0 Jan 21 10:18:23 crc kubenswrapper[4684]: I0121 10:18:23.367069 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq" event={"ID":"061544f4-badf-4c7b-a325-e582fa4e2451","Type":"ContainerDied","Data":"e77efe6a2daf5248b10cee3e694d45d6b4412803a7515acad3cbb8634fad467a"} Jan 21 10:18:24 crc kubenswrapper[4684]: I0121 10:18:24.560054 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etlmpw"] Jan 21 10:18:24 crc kubenswrapper[4684]: I0121 10:18:24.561728 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etlmpw" Jan 21 10:18:24 crc kubenswrapper[4684]: I0121 10:18:24.572664 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etlmpw"] Jan 21 10:18:24 crc kubenswrapper[4684]: I0121 10:18:24.581966 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/51e6c957-888f-4a6f-a18a-d13b6d901c78-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etlmpw\" (UID: \"51e6c957-888f-4a6f-a18a-d13b6d901c78\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etlmpw" Jan 21 10:18:24 crc kubenswrapper[4684]: I0121 10:18:24.582012 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcslt\" (UniqueName: \"kubernetes.io/projected/51e6c957-888f-4a6f-a18a-d13b6d901c78-kube-api-access-dcslt\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etlmpw\" (UID: \"51e6c957-888f-4a6f-a18a-d13b6d901c78\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etlmpw" Jan 21 10:18:24 crc kubenswrapper[4684]: I0121 10:18:24.582042 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/51e6c957-888f-4a6f-a18a-d13b6d901c78-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etlmpw\" (UID: \"51e6c957-888f-4a6f-a18a-d13b6d901c78\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etlmpw" Jan 21 10:18:24 crc kubenswrapper[4684]: I0121 10:18:24.634377 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq" Jan 21 10:18:24 crc kubenswrapper[4684]: I0121 10:18:24.683752 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/51e6c957-888f-4a6f-a18a-d13b6d901c78-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etlmpw\" (UID: \"51e6c957-888f-4a6f-a18a-d13b6d901c78\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etlmpw" Jan 21 10:18:24 crc kubenswrapper[4684]: I0121 10:18:24.683798 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcslt\" (UniqueName: \"kubernetes.io/projected/51e6c957-888f-4a6f-a18a-d13b6d901c78-kube-api-access-dcslt\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etlmpw\" (UID: \"51e6c957-888f-4a6f-a18a-d13b6d901c78\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etlmpw" Jan 21 10:18:24 crc kubenswrapper[4684]: I0121 10:18:24.683837 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/51e6c957-888f-4a6f-a18a-d13b6d901c78-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etlmpw\" (UID: \"51e6c957-888f-4a6f-a18a-d13b6d901c78\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etlmpw" Jan 21 10:18:24 crc kubenswrapper[4684]: I0121 10:18:24.684530 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/51e6c957-888f-4a6f-a18a-d13b6d901c78-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etlmpw\" (UID: \"51e6c957-888f-4a6f-a18a-d13b6d901c78\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etlmpw" Jan 21 10:18:24 crc kubenswrapper[4684]: I0121 10:18:24.684753 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/51e6c957-888f-4a6f-a18a-d13b6d901c78-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etlmpw\" (UID: \"51e6c957-888f-4a6f-a18a-d13b6d901c78\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etlmpw" Jan 21 10:18:24 crc kubenswrapper[4684]: I0121 10:18:24.703282 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcslt\" (UniqueName: \"kubernetes.io/projected/51e6c957-888f-4a6f-a18a-d13b6d901c78-kube-api-access-dcslt\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etlmpw\" (UID: \"51e6c957-888f-4a6f-a18a-d13b6d901c78\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etlmpw" Jan 21 10:18:24 crc kubenswrapper[4684]: I0121 10:18:24.784476 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hs2tr\" (UniqueName: \"kubernetes.io/projected/061544f4-badf-4c7b-a325-e582fa4e2451-kube-api-access-hs2tr\") pod \"061544f4-badf-4c7b-a325-e582fa4e2451\" (UID: \"061544f4-badf-4c7b-a325-e582fa4e2451\") " Jan 21 10:18:24 crc kubenswrapper[4684]: I0121 10:18:24.784959 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/061544f4-badf-4c7b-a325-e582fa4e2451-util\") pod \"061544f4-badf-4c7b-a325-e582fa4e2451\" (UID: 
\"061544f4-badf-4c7b-a325-e582fa4e2451\") " Jan 21 10:18:24 crc kubenswrapper[4684]: I0121 10:18:24.785136 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/061544f4-badf-4c7b-a325-e582fa4e2451-bundle\") pod \"061544f4-badf-4c7b-a325-e582fa4e2451\" (UID: \"061544f4-badf-4c7b-a325-e582fa4e2451\") " Jan 21 10:18:24 crc kubenswrapper[4684]: I0121 10:18:24.787572 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/061544f4-badf-4c7b-a325-e582fa4e2451-kube-api-access-hs2tr" (OuterVolumeSpecName: "kube-api-access-hs2tr") pod "061544f4-badf-4c7b-a325-e582fa4e2451" (UID: "061544f4-badf-4c7b-a325-e582fa4e2451"). InnerVolumeSpecName "kube-api-access-hs2tr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:18:24 crc kubenswrapper[4684]: I0121 10:18:24.790455 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/061544f4-badf-4c7b-a325-e582fa4e2451-bundle" (OuterVolumeSpecName: "bundle") pod "061544f4-badf-4c7b-a325-e582fa4e2451" (UID: "061544f4-badf-4c7b-a325-e582fa4e2451"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:18:24 crc kubenswrapper[4684]: I0121 10:18:24.798682 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/061544f4-badf-4c7b-a325-e582fa4e2451-util" (OuterVolumeSpecName: "util") pod "061544f4-badf-4c7b-a325-e582fa4e2451" (UID: "061544f4-badf-4c7b-a325-e582fa4e2451"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:18:24 crc kubenswrapper[4684]: I0121 10:18:24.886585 4684 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/061544f4-badf-4c7b-a325-e582fa4e2451-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:24 crc kubenswrapper[4684]: I0121 10:18:24.886621 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hs2tr\" (UniqueName: \"kubernetes.io/projected/061544f4-badf-4c7b-a325-e582fa4e2451-kube-api-access-hs2tr\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:24 crc kubenswrapper[4684]: I0121 10:18:24.886637 4684 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/061544f4-badf-4c7b-a325-e582fa4e2451-util\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:24 crc kubenswrapper[4684]: I0121 10:18:24.888309 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etlmpw" Jan 21 10:18:25 crc kubenswrapper[4684]: I0121 10:18:25.114177 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etlmpw"] Jan 21 10:18:25 crc kubenswrapper[4684]: W0121 10:18:25.118920 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51e6c957_888f_4a6f_a18a_d13b6d901c78.slice/crio-32e9709f3bda1f0ed50cf626e034f3e81d9c6dd5f62032dc18c642615ef76e19 WatchSource:0}: Error finding container 32e9709f3bda1f0ed50cf626e034f3e81d9c6dd5f62032dc18c642615ef76e19: Status 404 returned error can't find the container with id 32e9709f3bda1f0ed50cf626e034f3e81d9c6dd5f62032dc18c642615ef76e19 Jan 21 10:18:25 crc kubenswrapper[4684]: I0121 10:18:25.382728 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etlmpw" event={"ID":"51e6c957-888f-4a6f-a18a-d13b6d901c78","Type":"ContainerStarted","Data":"32e9709f3bda1f0ed50cf626e034f3e81d9c6dd5f62032dc18c642615ef76e19"} Jan 21 10:18:25 crc kubenswrapper[4684]: I0121 10:18:25.390556 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq" event={"ID":"061544f4-badf-4c7b-a325-e582fa4e2451","Type":"ContainerDied","Data":"4fd14e855a20c4f78e75af69a83b3c3457879b603c4e6677461fd4bdfcc70cea"} Jan 21 10:18:25 crc kubenswrapper[4684]: I0121 10:18:25.390594 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fd14e855a20c4f78e75af69a83b3c3457879b603c4e6677461fd4bdfcc70cea" Jan 21 10:18:25 crc kubenswrapper[4684]: I0121 10:18:25.390666 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq" Jan 21 10:18:26 crc kubenswrapper[4684]: I0121 10:18:26.400471 4684 generic.go:334] "Generic (PLEG): container finished" podID="51e6c957-888f-4a6f-a18a-d13b6d901c78" containerID="ac8c273821b8d6d089bd206bfd56f6f069c5b392b0fee8b9b256e69713d64d6d" exitCode=0 Jan 21 10:18:26 crc kubenswrapper[4684]: I0121 10:18:26.400558 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etlmpw" event={"ID":"51e6c957-888f-4a6f-a18a-d13b6d901c78","Type":"ContainerDied","Data":"ac8c273821b8d6d089bd206bfd56f6f069c5b392b0fee8b9b256e69713d64d6d"} Jan 21 10:18:28 crc kubenswrapper[4684]: I0121 10:18:28.413697 4684 generic.go:334] "Generic (PLEG): container finished" podID="51e6c957-888f-4a6f-a18a-d13b6d901c78" containerID="26088385d57d3fa6dfdd2d54a15a73b7535fb499a64ebffac7018f9e1940720f" exitCode=0 Jan 21 10:18:28 crc kubenswrapper[4684]: I0121 10:18:28.413747 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etlmpw" event={"ID":"51e6c957-888f-4a6f-a18a-d13b6d901c78","Type":"ContainerDied","Data":"26088385d57d3fa6dfdd2d54a15a73b7535fb499a64ebffac7018f9e1940720f"} Jan 21 10:18:28 crc kubenswrapper[4684]: I0121 10:18:28.786445 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4"] Jan 21 10:18:28 crc kubenswrapper[4684]: E0121 10:18:28.786652 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="061544f4-badf-4c7b-a325-e582fa4e2451" containerName="pull" Jan 21 10:18:28 crc kubenswrapper[4684]: I0121 10:18:28.786669 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="061544f4-badf-4c7b-a325-e582fa4e2451" containerName="pull" Jan 21 10:18:28 crc kubenswrapper[4684]: E0121 10:18:28.786682 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="061544f4-badf-4c7b-a325-e582fa4e2451" containerName="util" Jan 21 10:18:28 crc kubenswrapper[4684]: I0121 10:18:28.786688 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="061544f4-badf-4c7b-a325-e582fa4e2451" containerName="util" Jan 21 10:18:28 crc kubenswrapper[4684]: E0121 10:18:28.786696 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="061544f4-badf-4c7b-a325-e582fa4e2451" containerName="extract" Jan 21 10:18:28 crc kubenswrapper[4684]: I0121 10:18:28.786701 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="061544f4-badf-4c7b-a325-e582fa4e2451" containerName="extract" Jan 21 10:18:28 crc kubenswrapper[4684]: I0121 10:18:28.786794 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="061544f4-badf-4c7b-a325-e582fa4e2451" containerName="extract" Jan 21 10:18:28 crc kubenswrapper[4684]: I0121 10:18:28.787444 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4" Jan 21 10:18:28 crc kubenswrapper[4684]: I0121 10:18:28.807275 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4"] Jan 21 10:18:28 crc kubenswrapper[4684]: I0121 10:18:28.840256 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9pwc\" (UniqueName: \"kubernetes.io/projected/1e79586b-d501-4fa0-9c2b-d612682f0c43-kube-api-access-q9pwc\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4\" (UID: \"1e79586b-d501-4fa0-9c2b-d612682f0c43\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4" Jan 21 10:18:28 crc kubenswrapper[4684]: I0121 10:18:28.840305 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1e79586b-d501-4fa0-9c2b-d612682f0c43-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4\" (UID: \"1e79586b-d501-4fa0-9c2b-d612682f0c43\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4" Jan 21 10:18:28 crc kubenswrapper[4684]: I0121 10:18:28.840337 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1e79586b-d501-4fa0-9c2b-d612682f0c43-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4\" (UID: \"1e79586b-d501-4fa0-9c2b-d612682f0c43\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4" Jan 21 10:18:28 crc kubenswrapper[4684]: I0121 10:18:28.941608 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9pwc\" (UniqueName: \"kubernetes.io/projected/1e79586b-d501-4fa0-9c2b-d612682f0c43-kube-api-access-q9pwc\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4\" (UID: \"1e79586b-d501-4fa0-9c2b-d612682f0c43\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4" Jan 21 10:18:28 crc kubenswrapper[4684]: I0121 10:18:28.941673 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1e79586b-d501-4fa0-9c2b-d612682f0c43-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4\" (UID: \"1e79586b-d501-4fa0-9c2b-d612682f0c43\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4" Jan 21 10:18:28 crc kubenswrapper[4684]: I0121 10:18:28.941715 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1e79586b-d501-4fa0-9c2b-d612682f0c43-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4\" (UID: \"1e79586b-d501-4fa0-9c2b-d612682f0c43\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4" Jan 21 10:18:28 crc kubenswrapper[4684]: I0121 10:18:28.942247 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1e79586b-d501-4fa0-9c2b-d612682f0c43-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4\" (UID: \"1e79586b-d501-4fa0-9c2b-d612682f0c43\") " 
pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4" Jan 21 10:18:28 crc kubenswrapper[4684]: I0121 10:18:28.942311 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1e79586b-d501-4fa0-9c2b-d612682f0c43-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4\" (UID: \"1e79586b-d501-4fa0-9c2b-d612682f0c43\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4" Jan 21 10:18:28 crc kubenswrapper[4684]: I0121 10:18:28.968500 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9pwc\" (UniqueName: \"kubernetes.io/projected/1e79586b-d501-4fa0-9c2b-d612682f0c43-kube-api-access-q9pwc\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4\" (UID: \"1e79586b-d501-4fa0-9c2b-d612682f0c43\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4" Jan 21 10:18:29 crc kubenswrapper[4684]: I0121 10:18:29.103646 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4" Jan 21 10:18:29 crc kubenswrapper[4684]: I0121 10:18:29.448296 4684 generic.go:334] "Generic (PLEG): container finished" podID="51e6c957-888f-4a6f-a18a-d13b6d901c78" containerID="29ae949566484d6c9bca1a5687ea0cfd51cebe7e97bd470755dc16df0755a9af" exitCode=0 Jan 21 10:18:29 crc kubenswrapper[4684]: I0121 10:18:29.448335 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etlmpw" event={"ID":"51e6c957-888f-4a6f-a18a-d13b6d901c78","Type":"ContainerDied","Data":"29ae949566484d6c9bca1a5687ea0cfd51cebe7e97bd470755dc16df0755a9af"} Jan 21 10:18:29 crc kubenswrapper[4684]: I0121 10:18:29.609074 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4"] Jan 21 10:18:29 crc kubenswrapper[4684]: W0121 10:18:29.617068 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e79586b_d501_4fa0_9c2b_d612682f0c43.slice/crio-018d757f25f773da8f7e5ec27b53132f65cfd99fc2eb505eb3f7d06ed69547b0 WatchSource:0}: Error finding container 018d757f25f773da8f7e5ec27b53132f65cfd99fc2eb505eb3f7d06ed69547b0: Status 404 returned error can't find the container with id 018d757f25f773da8f7e5ec27b53132f65cfd99fc2eb505eb3f7d06ed69547b0 Jan 21 10:18:30 crc kubenswrapper[4684]: I0121 10:18:30.454425 4684 generic.go:334] "Generic (PLEG): container finished" podID="1e79586b-d501-4fa0-9c2b-d612682f0c43" containerID="bd0c6c881e7c99c0f9abde80c0a33b3bd1ee157b40cb7531907201f0a592edf5" exitCode=0 Jan 21 10:18:30 crc kubenswrapper[4684]: I0121 10:18:30.454489 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4" event={"ID":"1e79586b-d501-4fa0-9c2b-d612682f0c43","Type":"ContainerDied","Data":"bd0c6c881e7c99c0f9abde80c0a33b3bd1ee157b40cb7531907201f0a592edf5"} Jan 21 10:18:30 crc kubenswrapper[4684]: I0121 10:18:30.454546 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4" 
event={"ID":"1e79586b-d501-4fa0-9c2b-d612682f0c43","Type":"ContainerStarted","Data":"018d757f25f773da8f7e5ec27b53132f65cfd99fc2eb505eb3f7d06ed69547b0"} Jan 21 10:18:30 crc kubenswrapper[4684]: I0121 10:18:30.868073 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etlmpw" Jan 21 10:18:31 crc kubenswrapper[4684]: I0121 10:18:31.069311 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/51e6c957-888f-4a6f-a18a-d13b6d901c78-bundle\") pod \"51e6c957-888f-4a6f-a18a-d13b6d901c78\" (UID: \"51e6c957-888f-4a6f-a18a-d13b6d901c78\") " Jan 21 10:18:31 crc kubenswrapper[4684]: I0121 10:18:31.069450 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/51e6c957-888f-4a6f-a18a-d13b6d901c78-util\") pod \"51e6c957-888f-4a6f-a18a-d13b6d901c78\" (UID: \"51e6c957-888f-4a6f-a18a-d13b6d901c78\") " Jan 21 10:18:31 crc kubenswrapper[4684]: I0121 10:18:31.069495 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcslt\" (UniqueName: \"kubernetes.io/projected/51e6c957-888f-4a6f-a18a-d13b6d901c78-kube-api-access-dcslt\") pod \"51e6c957-888f-4a6f-a18a-d13b6d901c78\" (UID: \"51e6c957-888f-4a6f-a18a-d13b6d901c78\") " Jan 21 10:18:31 crc kubenswrapper[4684]: I0121 10:18:31.070347 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51e6c957-888f-4a6f-a18a-d13b6d901c78-bundle" (OuterVolumeSpecName: "bundle") pod "51e6c957-888f-4a6f-a18a-d13b6d901c78" (UID: "51e6c957-888f-4a6f-a18a-d13b6d901c78"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:18:31 crc kubenswrapper[4684]: I0121 10:18:31.078470 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51e6c957-888f-4a6f-a18a-d13b6d901c78-kube-api-access-dcslt" (OuterVolumeSpecName: "kube-api-access-dcslt") pod "51e6c957-888f-4a6f-a18a-d13b6d901c78" (UID: "51e6c957-888f-4a6f-a18a-d13b6d901c78"). InnerVolumeSpecName "kube-api-access-dcslt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:18:31 crc kubenswrapper[4684]: I0121 10:18:31.096745 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51e6c957-888f-4a6f-a18a-d13b6d901c78-util" (OuterVolumeSpecName: "util") pod "51e6c957-888f-4a6f-a18a-d13b6d901c78" (UID: "51e6c957-888f-4a6f-a18a-d13b6d901c78"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:18:31 crc kubenswrapper[4684]: I0121 10:18:31.170919 4684 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/51e6c957-888f-4a6f-a18a-d13b6d901c78-util\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:31 crc kubenswrapper[4684]: I0121 10:18:31.170969 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcslt\" (UniqueName: \"kubernetes.io/projected/51e6c957-888f-4a6f-a18a-d13b6d901c78-kube-api-access-dcslt\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:31 crc kubenswrapper[4684]: I0121 10:18:31.170987 4684 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/51e6c957-888f-4a6f-a18a-d13b6d901c78-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:31 crc kubenswrapper[4684]: I0121 10:18:31.460878 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etlmpw" event={"ID":"51e6c957-888f-4a6f-a18a-d13b6d901c78","Type":"ContainerDied","Data":"32e9709f3bda1f0ed50cf626e034f3e81d9c6dd5f62032dc18c642615ef76e19"} Jan 21 10:18:31 crc kubenswrapper[4684]: I0121 10:18:31.460934 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32e9709f3bda1f0ed50cf626e034f3e81d9c6dd5f62032dc18c642615ef76e19" Jan 21 10:18:31 crc kubenswrapper[4684]: I0121 10:18:31.460953 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etlmpw" Jan 21 10:18:32 crc kubenswrapper[4684]: I0121 10:18:32.906010 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zzwsr"] Jan 21 10:18:32 crc kubenswrapper[4684]: E0121 10:18:32.906589 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51e6c957-888f-4a6f-a18a-d13b6d901c78" containerName="pull" Jan 21 10:18:32 crc kubenswrapper[4684]: I0121 10:18:32.906605 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="51e6c957-888f-4a6f-a18a-d13b6d901c78" containerName="pull" Jan 21 10:18:32 crc kubenswrapper[4684]: E0121 10:18:32.906622 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51e6c957-888f-4a6f-a18a-d13b6d901c78" containerName="util" Jan 21 10:18:32 crc kubenswrapper[4684]: I0121 10:18:32.906629 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="51e6c957-888f-4a6f-a18a-d13b6d901c78" containerName="util" Jan 21 10:18:32 crc kubenswrapper[4684]: E0121 10:18:32.906642 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51e6c957-888f-4a6f-a18a-d13b6d901c78" containerName="extract" Jan 21 10:18:32 crc kubenswrapper[4684]: I0121 10:18:32.906649 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="51e6c957-888f-4a6f-a18a-d13b6d901c78" containerName="extract" Jan 21 10:18:32 crc kubenswrapper[4684]: I0121 10:18:32.906769 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="51e6c957-888f-4a6f-a18a-d13b6d901c78" containerName="extract" Jan 21 10:18:32 crc kubenswrapper[4684]: I0121 10:18:32.907642 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zzwsr" Jan 21 10:18:32 crc kubenswrapper[4684]: I0121 10:18:32.932625 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zzwsr"] Jan 21 10:18:33 crc kubenswrapper[4684]: I0121 10:18:33.092953 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ae078e6-e1ed-4026-8fcd-b1bb935afe23-utilities\") pod \"certified-operators-zzwsr\" (UID: \"0ae078e6-e1ed-4026-8fcd-b1bb935afe23\") " pod="openshift-marketplace/certified-operators-zzwsr" Jan 21 10:18:33 crc kubenswrapper[4684]: I0121 10:18:33.093021 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc8gl\" (UniqueName: \"kubernetes.io/projected/0ae078e6-e1ed-4026-8fcd-b1bb935afe23-kube-api-access-jc8gl\") pod \"certified-operators-zzwsr\" (UID: \"0ae078e6-e1ed-4026-8fcd-b1bb935afe23\") " pod="openshift-marketplace/certified-operators-zzwsr" Jan 21 10:18:33 crc kubenswrapper[4684]: I0121 10:18:33.093047 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ae078e6-e1ed-4026-8fcd-b1bb935afe23-catalog-content\") pod \"certified-operators-zzwsr\" (UID: \"0ae078e6-e1ed-4026-8fcd-b1bb935afe23\") " pod="openshift-marketplace/certified-operators-zzwsr" Jan 21 10:18:33 crc kubenswrapper[4684]: I0121 10:18:33.195317 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ae078e6-e1ed-4026-8fcd-b1bb935afe23-utilities\") pod \"certified-operators-zzwsr\" (UID: \"0ae078e6-e1ed-4026-8fcd-b1bb935afe23\") " pod="openshift-marketplace/certified-operators-zzwsr" Jan 21 10:18:33 crc kubenswrapper[4684]: I0121 10:18:33.195481 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc8gl\" (UniqueName: \"kubernetes.io/projected/0ae078e6-e1ed-4026-8fcd-b1bb935afe23-kube-api-access-jc8gl\") pod \"certified-operators-zzwsr\" (UID: \"0ae078e6-e1ed-4026-8fcd-b1bb935afe23\") " pod="openshift-marketplace/certified-operators-zzwsr" Jan 21 10:18:33 crc kubenswrapper[4684]: I0121 10:18:33.195503 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ae078e6-e1ed-4026-8fcd-b1bb935afe23-catalog-content\") pod \"certified-operators-zzwsr\" (UID: \"0ae078e6-e1ed-4026-8fcd-b1bb935afe23\") " pod="openshift-marketplace/certified-operators-zzwsr" Jan 21 10:18:33 crc kubenswrapper[4684]: I0121 10:18:33.195972 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ae078e6-e1ed-4026-8fcd-b1bb935afe23-utilities\") pod \"certified-operators-zzwsr\" (UID: \"0ae078e6-e1ed-4026-8fcd-b1bb935afe23\") " pod="openshift-marketplace/certified-operators-zzwsr" Jan 21 10:18:33 crc kubenswrapper[4684]: I0121 10:18:33.196065 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ae078e6-e1ed-4026-8fcd-b1bb935afe23-catalog-content\") pod \"certified-operators-zzwsr\" (UID: \"0ae078e6-e1ed-4026-8fcd-b1bb935afe23\") " pod="openshift-marketplace/certified-operators-zzwsr" Jan 21 10:18:33 crc kubenswrapper[4684]: I0121 10:18:33.213168 4684 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jc8gl\" (UniqueName: \"kubernetes.io/projected/0ae078e6-e1ed-4026-8fcd-b1bb935afe23-kube-api-access-jc8gl\") pod \"certified-operators-zzwsr\" (UID: \"0ae078e6-e1ed-4026-8fcd-b1bb935afe23\") " pod="openshift-marketplace/certified-operators-zzwsr" Jan 21 10:18:33 crc kubenswrapper[4684]: I0121 10:18:33.220430 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zzwsr" Jan 21 10:18:33 crc kubenswrapper[4684]: I0121 10:18:33.320535 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j88l9"] Jan 21 10:18:33 crc kubenswrapper[4684]: I0121 10:18:33.321798 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j88l9" Jan 21 10:18:33 crc kubenswrapper[4684]: I0121 10:18:33.368533 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j88l9"] Jan 21 10:18:33 crc kubenswrapper[4684]: I0121 10:18:33.504081 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtgq9\" (UniqueName: \"kubernetes.io/projected/a3c57917-2d13-4511-80fb-31950b741141-kube-api-access-wtgq9\") pod \"redhat-operators-j88l9\" (UID: \"a3c57917-2d13-4511-80fb-31950b741141\") " pod="openshift-marketplace/redhat-operators-j88l9" Jan 21 10:18:33 crc kubenswrapper[4684]: I0121 10:18:33.504164 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3c57917-2d13-4511-80fb-31950b741141-utilities\") pod \"redhat-operators-j88l9\" (UID: \"a3c57917-2d13-4511-80fb-31950b741141\") " pod="openshift-marketplace/redhat-operators-j88l9" Jan 21 10:18:33 crc kubenswrapper[4684]: I0121 10:18:33.504194 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3c57917-2d13-4511-80fb-31950b741141-catalog-content\") pod \"redhat-operators-j88l9\" (UID: \"a3c57917-2d13-4511-80fb-31950b741141\") " pod="openshift-marketplace/redhat-operators-j88l9" Jan 21 10:18:33 crc kubenswrapper[4684]: I0121 10:18:33.548110 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zzwsr"] Jan 21 10:18:33 crc kubenswrapper[4684]: W0121 10:18:33.559126 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ae078e6_e1ed_4026_8fcd_b1bb935afe23.slice/crio-59471cda137362146b0761f4a6413af56948e5e5598bafec52b6b0170f705501 WatchSource:0}: Error finding container 59471cda137362146b0761f4a6413af56948e5e5598bafec52b6b0170f705501: Status 404 returned error can't find the container with id 59471cda137362146b0761f4a6413af56948e5e5598bafec52b6b0170f705501 Jan 21 10:18:33 crc kubenswrapper[4684]: I0121 10:18:33.605712 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3c57917-2d13-4511-80fb-31950b741141-utilities\") pod \"redhat-operators-j88l9\" (UID: \"a3c57917-2d13-4511-80fb-31950b741141\") " pod="openshift-marketplace/redhat-operators-j88l9" Jan 21 10:18:33 crc kubenswrapper[4684]: I0121 10:18:33.605770 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a3c57917-2d13-4511-80fb-31950b741141-catalog-content\") pod \"redhat-operators-j88l9\" (UID: \"a3c57917-2d13-4511-80fb-31950b741141\") " pod="openshift-marketplace/redhat-operators-j88l9" Jan 21 10:18:33 crc kubenswrapper[4684]: I0121 10:18:33.605842 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtgq9\" (UniqueName: \"kubernetes.io/projected/a3c57917-2d13-4511-80fb-31950b741141-kube-api-access-wtgq9\") pod \"redhat-operators-j88l9\" (UID: \"a3c57917-2d13-4511-80fb-31950b741141\") " pod="openshift-marketplace/redhat-operators-j88l9" Jan 21 10:18:33 crc kubenswrapper[4684]: I0121 10:18:33.606302 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3c57917-2d13-4511-80fb-31950b741141-utilities\") pod \"redhat-operators-j88l9\" (UID: \"a3c57917-2d13-4511-80fb-31950b741141\") " pod="openshift-marketplace/redhat-operators-j88l9" Jan 21 10:18:33 crc kubenswrapper[4684]: I0121 10:18:33.606348 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3c57917-2d13-4511-80fb-31950b741141-catalog-content\") pod \"redhat-operators-j88l9\" (UID: \"a3c57917-2d13-4511-80fb-31950b741141\") " pod="openshift-marketplace/redhat-operators-j88l9" Jan 21 10:18:33 crc kubenswrapper[4684]: I0121 10:18:33.629284 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtgq9\" (UniqueName: \"kubernetes.io/projected/a3c57917-2d13-4511-80fb-31950b741141-kube-api-access-wtgq9\") pod \"redhat-operators-j88l9\" (UID: \"a3c57917-2d13-4511-80fb-31950b741141\") " pod="openshift-marketplace/redhat-operators-j88l9" Jan 21 10:18:33 crc kubenswrapper[4684]: I0121 10:18:33.651630 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j88l9" Jan 21 10:18:33 crc kubenswrapper[4684]: I0121 10:18:33.941574 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j88l9"] Jan 21 10:18:34 crc kubenswrapper[4684]: I0121 10:18:34.483425 4684 generic.go:334] "Generic (PLEG): container finished" podID="0ae078e6-e1ed-4026-8fcd-b1bb935afe23" containerID="9fee32d5816a093c92f4d4b160ce8de3b52c230baa66ba4ddc3c3e150e4fed14" exitCode=0 Jan 21 10:18:34 crc kubenswrapper[4684]: I0121 10:18:34.483501 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzwsr" event={"ID":"0ae078e6-e1ed-4026-8fcd-b1bb935afe23","Type":"ContainerDied","Data":"9fee32d5816a093c92f4d4b160ce8de3b52c230baa66ba4ddc3c3e150e4fed14"} Jan 21 10:18:34 crc kubenswrapper[4684]: I0121 10:18:34.483574 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzwsr" event={"ID":"0ae078e6-e1ed-4026-8fcd-b1bb935afe23","Type":"ContainerStarted","Data":"59471cda137362146b0761f4a6413af56948e5e5598bafec52b6b0170f705501"} Jan 21 10:18:34 crc kubenswrapper[4684]: I0121 10:18:34.484799 4684 generic.go:334] "Generic (PLEG): container finished" podID="a3c57917-2d13-4511-80fb-31950b741141" containerID="dc47dd1d66c532850b0faf5b18daf78c89f6551755971cb2a70c019af3e4ba60" exitCode=0 Jan 21 10:18:34 crc kubenswrapper[4684]: I0121 10:18:34.484895 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j88l9" event={"ID":"a3c57917-2d13-4511-80fb-31950b741141","Type":"ContainerDied","Data":"dc47dd1d66c532850b0faf5b18daf78c89f6551755971cb2a70c019af3e4ba60"} Jan 21 10:18:34 crc kubenswrapper[4684]: I0121 10:18:34.485077 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j88l9" event={"ID":"a3c57917-2d13-4511-80fb-31950b741141","Type":"ContainerStarted","Data":"8a7c0a5d3f7e4437ef2a9ecd1e89cf10cfb71cec910417ae1d68a7a4f652bbb8"} Jan 21 10:18:34 crc kubenswrapper[4684]: I0121 10:18:34.840946 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-9gqzc"] Jan 21 10:18:34 crc kubenswrapper[4684]: I0121 10:18:34.841776 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9gqzc" Jan 21 10:18:34 crc kubenswrapper[4684]: I0121 10:18:34.851914 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-k8krn" Jan 21 10:18:34 crc kubenswrapper[4684]: I0121 10:18:34.852213 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 21 10:18:34 crc kubenswrapper[4684]: I0121 10:18:34.855889 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 21 10:18:34 crc kubenswrapper[4684]: I0121 10:18:34.856585 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-9gqzc"] Jan 21 10:18:34 crc kubenswrapper[4684]: I0121 10:18:34.925104 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m559m\" (UniqueName: \"kubernetes.io/projected/f2f1b01a-1863-4f20-892e-a4ac0f808d71-kube-api-access-m559m\") pod \"obo-prometheus-operator-68bc856cb9-9gqzc\" (UID: \"f2f1b01a-1863-4f20-892e-a4ac0f808d71\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9gqzc" Jan 21 10:18:34 crc kubenswrapper[4684]: I0121 10:18:34.936679 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5cdf696bf7-8txfk"] Jan 21 10:18:34 crc kubenswrapper[4684]: I0121 10:18:34.937336 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cdf696bf7-8txfk" Jan 21 10:18:34 crc kubenswrapper[4684]: W0121 10:18:34.940299 4684 reflector.go:561] object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-c86qx": failed to list *v1.Secret: secrets "obo-prometheus-operator-admission-webhook-dockercfg-c86qx" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-operators": no relationship found between node 'crc' and this object Jan 21 10:18:34 crc kubenswrapper[4684]: E0121 10:18:34.940351 4684 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-c86qx\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"obo-prometheus-operator-admission-webhook-dockercfg-c86qx\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 10:18:34 crc kubenswrapper[4684]: W0121 10:18:34.940463 4684 reflector.go:561] object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert": failed to list *v1.Secret: secrets "obo-prometheus-operator-admission-webhook-service-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-operators": no relationship found between node 'crc' and this object Jan 21 10:18:34 crc kubenswrapper[4684]: E0121 10:18:34.940481 4684 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"obo-prometheus-operator-admission-webhook-service-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API 
group \"\" in the namespace \"openshift-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 10:18:34 crc kubenswrapper[4684]: I0121 10:18:34.991225 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5cdf696bf7-s2ht7"] Jan 21 10:18:34 crc kubenswrapper[4684]: I0121 10:18:34.991973 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cdf696bf7-s2ht7" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.006257 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5cdf696bf7-8txfk"] Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.026152 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m559m\" (UniqueName: \"kubernetes.io/projected/f2f1b01a-1863-4f20-892e-a4ac0f808d71-kube-api-access-m559m\") pod \"obo-prometheus-operator-68bc856cb9-9gqzc\" (UID: \"f2f1b01a-1863-4f20-892e-a4ac0f808d71\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9gqzc" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.026206 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2669523c-e1d2-4b87-9e3e-f1e526c6ede5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5cdf696bf7-8txfk\" (UID: \"2669523c-e1d2-4b87-9e3e-f1e526c6ede5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cdf696bf7-8txfk" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.026223 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/394f40ff-afe2-471b-ae2d-6ba7ac2be402-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5cdf696bf7-s2ht7\" (UID: \"394f40ff-afe2-471b-ae2d-6ba7ac2be402\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cdf696bf7-s2ht7" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.026262 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/394f40ff-afe2-471b-ae2d-6ba7ac2be402-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5cdf696bf7-s2ht7\" (UID: \"394f40ff-afe2-471b-ae2d-6ba7ac2be402\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cdf696bf7-s2ht7" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.026289 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2669523c-e1d2-4b87-9e3e-f1e526c6ede5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5cdf696bf7-8txfk\" (UID: \"2669523c-e1d2-4b87-9e3e-f1e526c6ede5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cdf696bf7-8txfk" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.028089 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5cdf696bf7-s2ht7"] Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.070778 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m559m\" (UniqueName: 
\"kubernetes.io/projected/f2f1b01a-1863-4f20-892e-a4ac0f808d71-kube-api-access-m559m\") pod \"obo-prometheus-operator-68bc856cb9-9gqzc\" (UID: \"f2f1b01a-1863-4f20-892e-a4ac0f808d71\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9gqzc" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.112031 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-njtvc"] Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.112753 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-njtvc" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.118167 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.118457 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-l2drh" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.127612 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2669523c-e1d2-4b87-9e3e-f1e526c6ede5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5cdf696bf7-8txfk\" (UID: \"2669523c-e1d2-4b87-9e3e-f1e526c6ede5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cdf696bf7-8txfk" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.127674 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2669523c-e1d2-4b87-9e3e-f1e526c6ede5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5cdf696bf7-8txfk\" (UID: \"2669523c-e1d2-4b87-9e3e-f1e526c6ede5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cdf696bf7-8txfk" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.127694 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/394f40ff-afe2-471b-ae2d-6ba7ac2be402-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5cdf696bf7-s2ht7\" (UID: \"394f40ff-afe2-471b-ae2d-6ba7ac2be402\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cdf696bf7-s2ht7" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.127755 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j87nh\" (UniqueName: \"kubernetes.io/projected/319c1164-c464-40c9-a163-89cbc719fa56-kube-api-access-j87nh\") pod \"observability-operator-59bdc8b94-njtvc\" (UID: \"319c1164-c464-40c9-a163-89cbc719fa56\") " pod="openshift-operators/observability-operator-59bdc8b94-njtvc" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.127785 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/319c1164-c464-40c9-a163-89cbc719fa56-observability-operator-tls\") pod \"observability-operator-59bdc8b94-njtvc\" (UID: \"319c1164-c464-40c9-a163-89cbc719fa56\") " pod="openshift-operators/observability-operator-59bdc8b94-njtvc" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.127804 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/394f40ff-afe2-471b-ae2d-6ba7ac2be402-apiservice-cert\") pod 
\"obo-prometheus-operator-admission-webhook-5cdf696bf7-s2ht7\" (UID: \"394f40ff-afe2-471b-ae2d-6ba7ac2be402\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cdf696bf7-s2ht7" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.140403 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-njtvc"] Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.199821 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9gqzc" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.229280 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j87nh\" (UniqueName: \"kubernetes.io/projected/319c1164-c464-40c9-a163-89cbc719fa56-kube-api-access-j87nh\") pod \"observability-operator-59bdc8b94-njtvc\" (UID: \"319c1164-c464-40c9-a163-89cbc719fa56\") " pod="openshift-operators/observability-operator-59bdc8b94-njtvc" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.229335 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/319c1164-c464-40c9-a163-89cbc719fa56-observability-operator-tls\") pod \"observability-operator-59bdc8b94-njtvc\" (UID: \"319c1164-c464-40c9-a163-89cbc719fa56\") " pod="openshift-operators/observability-operator-59bdc8b94-njtvc" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.233120 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/319c1164-c464-40c9-a163-89cbc719fa56-observability-operator-tls\") pod \"observability-operator-59bdc8b94-njtvc\" (UID: \"319c1164-c464-40c9-a163-89cbc719fa56\") " pod="openshift-operators/observability-operator-59bdc8b94-njtvc" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.248410 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j87nh\" (UniqueName: \"kubernetes.io/projected/319c1164-c464-40c9-a163-89cbc719fa56-kube-api-access-j87nh\") pod \"observability-operator-59bdc8b94-njtvc\" (UID: \"319c1164-c464-40c9-a163-89cbc719fa56\") " pod="openshift-operators/observability-operator-59bdc8b94-njtvc" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.295916 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-tgqvm"] Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.296855 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tgqvm" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.298727 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-xf9s8" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.309745 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-tgqvm"] Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.432418 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7a9202ce-c8d5-4fd6-a6c4-b670d0b14e26-openshift-service-ca\") pod \"perses-operator-5bf474d74f-tgqvm\" (UID: \"7a9202ce-c8d5-4fd6-a6c4-b670d0b14e26\") " pod="openshift-operators/perses-operator-5bf474d74f-tgqvm" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.432599 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6629\" (UniqueName: \"kubernetes.io/projected/7a9202ce-c8d5-4fd6-a6c4-b670d0b14e26-kube-api-access-q6629\") pod \"perses-operator-5bf474d74f-tgqvm\" (UID: \"7a9202ce-c8d5-4fd6-a6c4-b670d0b14e26\") " pod="openshift-operators/perses-operator-5bf474d74f-tgqvm" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.433855 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-njtvc" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.538984 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6629\" (UniqueName: \"kubernetes.io/projected/7a9202ce-c8d5-4fd6-a6c4-b670d0b14e26-kube-api-access-q6629\") pod \"perses-operator-5bf474d74f-tgqvm\" (UID: \"7a9202ce-c8d5-4fd6-a6c4-b670d0b14e26\") " pod="openshift-operators/perses-operator-5bf474d74f-tgqvm" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.539109 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7a9202ce-c8d5-4fd6-a6c4-b670d0b14e26-openshift-service-ca\") pod \"perses-operator-5bf474d74f-tgqvm\" (UID: \"7a9202ce-c8d5-4fd6-a6c4-b670d0b14e26\") " pod="openshift-operators/perses-operator-5bf474d74f-tgqvm" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.540208 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7a9202ce-c8d5-4fd6-a6c4-b670d0b14e26-openshift-service-ca\") pod \"perses-operator-5bf474d74f-tgqvm\" (UID: \"7a9202ce-c8d5-4fd6-a6c4-b670d0b14e26\") " pod="openshift-operators/perses-operator-5bf474d74f-tgqvm" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.585806 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6629\" (UniqueName: \"kubernetes.io/projected/7a9202ce-c8d5-4fd6-a6c4-b670d0b14e26-kube-api-access-q6629\") pod \"perses-operator-5bf474d74f-tgqvm\" (UID: \"7a9202ce-c8d5-4fd6-a6c4-b670d0b14e26\") " pod="openshift-operators/perses-operator-5bf474d74f-tgqvm" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.636691 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tgqvm" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.691664 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-9gqzc"] Jan 21 10:18:35 crc kubenswrapper[4684]: W0121 10:18:35.703291 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2f1b01a_1863_4f20_892e_a4ac0f808d71.slice/crio-0d261bb0fd638df5b96308097e85dfe14d3d715dbc110b6a4965a9e8b48c161e WatchSource:0}: Error finding container 0d261bb0fd638df5b96308097e85dfe14d3d715dbc110b6a4965a9e8b48c161e: Status 404 returned error can't find the container with id 0d261bb0fd638df5b96308097e85dfe14d3d715dbc110b6a4965a9e8b48c161e Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.735777 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-njtvc"] Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.839530 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.856516 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2669523c-e1d2-4b87-9e3e-f1e526c6ede5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5cdf696bf7-8txfk\" (UID: \"2669523c-e1d2-4b87-9e3e-f1e526c6ede5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cdf696bf7-8txfk" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.857122 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/394f40ff-afe2-471b-ae2d-6ba7ac2be402-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5cdf696bf7-s2ht7\" (UID: \"394f40ff-afe2-471b-ae2d-6ba7ac2be402\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cdf696bf7-s2ht7" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.857857 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/394f40ff-afe2-471b-ae2d-6ba7ac2be402-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5cdf696bf7-s2ht7\" (UID: \"394f40ff-afe2-471b-ae2d-6ba7ac2be402\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cdf696bf7-s2ht7" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.869557 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2669523c-e1d2-4b87-9e3e-f1e526c6ede5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5cdf696bf7-8txfk\" (UID: \"2669523c-e1d2-4b87-9e3e-f1e526c6ede5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cdf696bf7-8txfk" Jan 21 10:18:35 crc kubenswrapper[4684]: I0121 10:18:35.928134 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-tgqvm"] Jan 21 10:18:36 crc kubenswrapper[4684]: I0121 10:18:36.116759 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-c86qx" Jan 21 10:18:36 crc kubenswrapper[4684]: I0121 10:18:36.117711 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cdf696bf7-s2ht7" Jan 21 10:18:36 crc kubenswrapper[4684]: I0121 10:18:36.151388 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cdf696bf7-8txfk" Jan 21 10:18:36 crc kubenswrapper[4684]: I0121 10:18:36.506060 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-njtvc" event={"ID":"319c1164-c464-40c9-a163-89cbc719fa56","Type":"ContainerStarted","Data":"56cf74eab2c3e8488bee203ef2f25da8ceb8705fcc99c67ad2b4e5f3e997d030"} Jan 21 10:18:36 crc kubenswrapper[4684]: I0121 10:18:36.511217 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9gqzc" event={"ID":"f2f1b01a-1863-4f20-892e-a4ac0f808d71","Type":"ContainerStarted","Data":"0d261bb0fd638df5b96308097e85dfe14d3d715dbc110b6a4965a9e8b48c161e"} Jan 21 10:18:36 crc kubenswrapper[4684]: I0121 10:18:36.515900 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-tgqvm" event={"ID":"7a9202ce-c8d5-4fd6-a6c4-b670d0b14e26","Type":"ContainerStarted","Data":"5d310620b37dc26628dfd8458ebd26b9f136a27a8ba1d828e0853c814b8e9dcd"} Jan 21 10:18:36 crc kubenswrapper[4684]: I0121 10:18:36.551175 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j88l9" event={"ID":"a3c57917-2d13-4511-80fb-31950b741141","Type":"ContainerStarted","Data":"598d7ba6ca4eecbcf69f1dc67bb63baebefa9037112aeca019068e0461a9aadd"} Jan 21 10:18:36 crc kubenswrapper[4684]: I0121 10:18:36.748760 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5cdf696bf7-8txfk"] Jan 21 10:18:36 crc kubenswrapper[4684]: I0121 10:18:36.814177 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5cdf696bf7-s2ht7"] Jan 21 10:18:36 crc kubenswrapper[4684]: W0121 10:18:36.827087 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod394f40ff_afe2_471b_ae2d_6ba7ac2be402.slice/crio-a6a5ac4086ef25464cfcd67147a936efb5af10e88ac316b7d931a4fce693bccd WatchSource:0}: Error finding container a6a5ac4086ef25464cfcd67147a936efb5af10e88ac316b7d931a4fce693bccd: Status 404 returned error can't find the container with id a6a5ac4086ef25464cfcd67147a936efb5af10e88ac316b7d931a4fce693bccd Jan 21 10:18:37 crc kubenswrapper[4684]: I0121 10:18:37.302902 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:18:37 crc kubenswrapper[4684]: I0121 10:18:37.303230 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:18:37 crc kubenswrapper[4684]: I0121 10:18:37.568138 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cdf696bf7-8txfk" 
event={"ID":"2669523c-e1d2-4b87-9e3e-f1e526c6ede5","Type":"ContainerStarted","Data":"a82f341b24dc6ee9e533926f368ad280558578e9713b826ac75d10f51a0b4ad2"} Jan 21 10:18:37 crc kubenswrapper[4684]: I0121 10:18:37.569514 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cdf696bf7-s2ht7" event={"ID":"394f40ff-afe2-471b-ae2d-6ba7ac2be402","Type":"ContainerStarted","Data":"a6a5ac4086ef25464cfcd67147a936efb5af10e88ac316b7d931a4fce693bccd"} Jan 21 10:18:37 crc kubenswrapper[4684]: I0121 10:18:37.571067 4684 generic.go:334] "Generic (PLEG): container finished" podID="a3c57917-2d13-4511-80fb-31950b741141" containerID="598d7ba6ca4eecbcf69f1dc67bb63baebefa9037112aeca019068e0461a9aadd" exitCode=0 Jan 21 10:18:37 crc kubenswrapper[4684]: I0121 10:18:37.571096 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j88l9" event={"ID":"a3c57917-2d13-4511-80fb-31950b741141","Type":"ContainerDied","Data":"598d7ba6ca4eecbcf69f1dc67bb63baebefa9037112aeca019068e0461a9aadd"} Jan 21 10:18:37 crc kubenswrapper[4684]: I0121 10:18:37.601213 4684 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 10:18:39 crc kubenswrapper[4684]: I0121 10:18:39.123803 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-bb4ffb7f7-gzmw4"] Jan 21 10:18:39 crc kubenswrapper[4684]: I0121 10:18:39.124992 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-bb4ffb7f7-gzmw4" Jan 21 10:18:39 crc kubenswrapper[4684]: I0121 10:18:39.136863 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-service-cert" Jan 21 10:18:39 crc kubenswrapper[4684]: I0121 10:18:39.137143 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"kube-root-ca.crt" Jan 21 10:18:39 crc kubenswrapper[4684]: I0121 10:18:39.138586 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"openshift-service-ca.crt" Jan 21 10:18:39 crc kubenswrapper[4684]: I0121 10:18:39.138838 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-dockercfg-jzmns" Jan 21 10:18:39 crc kubenswrapper[4684]: I0121 10:18:39.147429 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-bb4ffb7f7-gzmw4"] Jan 21 10:18:39 crc kubenswrapper[4684]: I0121 10:18:39.310148 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9a3ae8fc-094f-4ee5-8c57-c1ddbccfc06d-apiservice-cert\") pod \"elastic-operator-bb4ffb7f7-gzmw4\" (UID: \"9a3ae8fc-094f-4ee5-8c57-c1ddbccfc06d\") " pod="service-telemetry/elastic-operator-bb4ffb7f7-gzmw4" Jan 21 10:18:39 crc kubenswrapper[4684]: I0121 10:18:39.310211 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9a3ae8fc-094f-4ee5-8c57-c1ddbccfc06d-webhook-cert\") pod \"elastic-operator-bb4ffb7f7-gzmw4\" (UID: \"9a3ae8fc-094f-4ee5-8c57-c1ddbccfc06d\") " pod="service-telemetry/elastic-operator-bb4ffb7f7-gzmw4" Jan 21 10:18:39 crc kubenswrapper[4684]: I0121 10:18:39.310312 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-lvp65\" (UniqueName: \"kubernetes.io/projected/9a3ae8fc-094f-4ee5-8c57-c1ddbccfc06d-kube-api-access-lvp65\") pod \"elastic-operator-bb4ffb7f7-gzmw4\" (UID: \"9a3ae8fc-094f-4ee5-8c57-c1ddbccfc06d\") " pod="service-telemetry/elastic-operator-bb4ffb7f7-gzmw4" Jan 21 10:18:39 crc kubenswrapper[4684]: I0121 10:18:39.411173 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvp65\" (UniqueName: \"kubernetes.io/projected/9a3ae8fc-094f-4ee5-8c57-c1ddbccfc06d-kube-api-access-lvp65\") pod \"elastic-operator-bb4ffb7f7-gzmw4\" (UID: \"9a3ae8fc-094f-4ee5-8c57-c1ddbccfc06d\") " pod="service-telemetry/elastic-operator-bb4ffb7f7-gzmw4" Jan 21 10:18:39 crc kubenswrapper[4684]: I0121 10:18:39.411279 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9a3ae8fc-094f-4ee5-8c57-c1ddbccfc06d-apiservice-cert\") pod \"elastic-operator-bb4ffb7f7-gzmw4\" (UID: \"9a3ae8fc-094f-4ee5-8c57-c1ddbccfc06d\") " pod="service-telemetry/elastic-operator-bb4ffb7f7-gzmw4" Jan 21 10:18:39 crc kubenswrapper[4684]: I0121 10:18:39.411308 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9a3ae8fc-094f-4ee5-8c57-c1ddbccfc06d-webhook-cert\") pod \"elastic-operator-bb4ffb7f7-gzmw4\" (UID: \"9a3ae8fc-094f-4ee5-8c57-c1ddbccfc06d\") " pod="service-telemetry/elastic-operator-bb4ffb7f7-gzmw4" Jan 21 10:18:39 crc kubenswrapper[4684]: I0121 10:18:39.419113 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9a3ae8fc-094f-4ee5-8c57-c1ddbccfc06d-apiservice-cert\") pod \"elastic-operator-bb4ffb7f7-gzmw4\" (UID: \"9a3ae8fc-094f-4ee5-8c57-c1ddbccfc06d\") " pod="service-telemetry/elastic-operator-bb4ffb7f7-gzmw4" Jan 21 10:18:39 crc kubenswrapper[4684]: I0121 10:18:39.427998 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9a3ae8fc-094f-4ee5-8c57-c1ddbccfc06d-webhook-cert\") pod \"elastic-operator-bb4ffb7f7-gzmw4\" (UID: \"9a3ae8fc-094f-4ee5-8c57-c1ddbccfc06d\") " pod="service-telemetry/elastic-operator-bb4ffb7f7-gzmw4" Jan 21 10:18:39 crc kubenswrapper[4684]: I0121 10:18:39.439835 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvp65\" (UniqueName: \"kubernetes.io/projected/9a3ae8fc-094f-4ee5-8c57-c1ddbccfc06d-kube-api-access-lvp65\") pod \"elastic-operator-bb4ffb7f7-gzmw4\" (UID: \"9a3ae8fc-094f-4ee5-8c57-c1ddbccfc06d\") " pod="service-telemetry/elastic-operator-bb4ffb7f7-gzmw4" Jan 21 10:18:39 crc kubenswrapper[4684]: I0121 10:18:39.481785 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-bb4ffb7f7-gzmw4" Jan 21 10:18:47 crc kubenswrapper[4684]: I0121 10:18:47.631693 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-bb4ffb7f7-gzmw4"] Jan 21 10:18:47 crc kubenswrapper[4684]: W0121 10:18:47.657957 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a3ae8fc_094f_4ee5_8c57_c1ddbccfc06d.slice/crio-a638071b90b8fdf9096293c5135c73b2ff6517cdb0f1b740a9d68aa39b7d6e3b WatchSource:0}: Error finding container a638071b90b8fdf9096293c5135c73b2ff6517cdb0f1b740a9d68aa39b7d6e3b: Status 404 returned error can't find the container with id a638071b90b8fdf9096293c5135c73b2ff6517cdb0f1b740a9d68aa39b7d6e3b Jan 21 10:18:48 crc kubenswrapper[4684]: I0121 10:18:48.640084 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-bb4ffb7f7-gzmw4" event={"ID":"9a3ae8fc-094f-4ee5-8c57-c1ddbccfc06d","Type":"ContainerStarted","Data":"a638071b90b8fdf9096293c5135c73b2ff6517cdb0f1b740a9d68aa39b7d6e3b"} Jan 21 10:18:48 crc kubenswrapper[4684]: I0121 10:18:48.657171 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cdf696bf7-s2ht7" event={"ID":"394f40ff-afe2-471b-ae2d-6ba7ac2be402","Type":"ContainerStarted","Data":"fdf792aa0a51b1472568467e925644b3cc92790f6049130ff7bc6646cb2e51f3"} Jan 21 10:18:48 crc kubenswrapper[4684]: I0121 10:18:48.665843 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzwsr" event={"ID":"0ae078e6-e1ed-4026-8fcd-b1bb935afe23","Type":"ContainerStarted","Data":"410abff2156b930bbdf39f2d616a99113ff3cd51106cf653008be09358190d98"} Jan 21 10:18:48 crc kubenswrapper[4684]: I0121 10:18:48.676652 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cdf696bf7-s2ht7" podStartSLOduration=4.073479972 podStartE2EDuration="14.6766395s" podCreationTimestamp="2026-01-21 10:18:34 +0000 UTC" firstStartedPulling="2026-01-21 10:18:36.834340576 +0000 UTC m=+754.592423543" lastFinishedPulling="2026-01-21 10:18:47.437500104 +0000 UTC m=+765.195583071" observedRunningTime="2026-01-21 10:18:48.67564726 +0000 UTC m=+766.433730237" watchObservedRunningTime="2026-01-21 10:18:48.6766395 +0000 UTC m=+766.434722467" Jan 21 10:18:48 crc kubenswrapper[4684]: I0121 10:18:48.676684 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j88l9" event={"ID":"a3c57917-2d13-4511-80fb-31950b741141","Type":"ContainerStarted","Data":"af76284c1df63a66d1804ed925b5ed46e5a7c3f8f44211f86c0e85320eb1c58f"} Jan 21 10:18:48 crc kubenswrapper[4684]: I0121 10:18:48.679214 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-tgqvm" event={"ID":"7a9202ce-c8d5-4fd6-a6c4-b670d0b14e26","Type":"ContainerStarted","Data":"b0aba3692787fe57cecc96a5ca520cdfdcb258f2c25c7569654129f1b4d9933b"} Jan 21 10:18:48 crc kubenswrapper[4684]: I0121 10:18:48.679798 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-tgqvm" Jan 21 10:18:48 crc kubenswrapper[4684]: I0121 10:18:48.681153 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cdf696bf7-8txfk" 
event={"ID":"2669523c-e1d2-4b87-9e3e-f1e526c6ede5","Type":"ContainerStarted","Data":"d6cb26dd1bc78d32aba27a118d08d30651f96b52c0335ef9965318dd4dccf04a"} Jan 21 10:18:48 crc kubenswrapper[4684]: I0121 10:18:48.684388 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4" event={"ID":"1e79586b-d501-4fa0-9c2b-d612682f0c43","Type":"ContainerStarted","Data":"52b4e941cafa400c71dfb5596e0343a1716ca17cfc4aef215923f71b33d54f6d"} Jan 21 10:18:48 crc kubenswrapper[4684]: I0121 10:18:48.729253 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-tgqvm" podStartSLOduration=2.340467137 podStartE2EDuration="13.729238542s" podCreationTimestamp="2026-01-21 10:18:35 +0000 UTC" firstStartedPulling="2026-01-21 10:18:35.936926562 +0000 UTC m=+753.695009529" lastFinishedPulling="2026-01-21 10:18:47.325697967 +0000 UTC m=+765.083780934" observedRunningTime="2026-01-21 10:18:48.727943342 +0000 UTC m=+766.486026319" watchObservedRunningTime="2026-01-21 10:18:48.729238542 +0000 UTC m=+766.487321509" Jan 21 10:18:48 crc kubenswrapper[4684]: I0121 10:18:48.825966 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j88l9" podStartSLOduration=2.987698968 podStartE2EDuration="15.825942346s" podCreationTimestamp="2026-01-21 10:18:33 +0000 UTC" firstStartedPulling="2026-01-21 10:18:34.486240832 +0000 UTC m=+752.244323799" lastFinishedPulling="2026-01-21 10:18:47.32448421 +0000 UTC m=+765.082567177" observedRunningTime="2026-01-21 10:18:48.780702329 +0000 UTC m=+766.538785306" watchObservedRunningTime="2026-01-21 10:18:48.825942346 +0000 UTC m=+766.584025323" Jan 21 10:18:48 crc kubenswrapper[4684]: I0121 10:18:48.853501 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cdf696bf7-8txfk" podStartSLOduration=4.299926732 podStartE2EDuration="14.8534847s" podCreationTimestamp="2026-01-21 10:18:34 +0000 UTC" firstStartedPulling="2026-01-21 10:18:36.774211263 +0000 UTC m=+754.532294230" lastFinishedPulling="2026-01-21 10:18:47.327769231 +0000 UTC m=+765.085852198" observedRunningTime="2026-01-21 10:18:48.849318142 +0000 UTC m=+766.607401109" watchObservedRunningTime="2026-01-21 10:18:48.8534847 +0000 UTC m=+766.611567667" Jan 21 10:18:49 crc kubenswrapper[4684]: I0121 10:18:49.703861 4684 generic.go:334] "Generic (PLEG): container finished" podID="1e79586b-d501-4fa0-9c2b-d612682f0c43" containerID="52b4e941cafa400c71dfb5596e0343a1716ca17cfc4aef215923f71b33d54f6d" exitCode=0 Jan 21 10:18:49 crc kubenswrapper[4684]: I0121 10:18:49.703930 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4" event={"ID":"1e79586b-d501-4fa0-9c2b-d612682f0c43","Type":"ContainerDied","Data":"52b4e941cafa400c71dfb5596e0343a1716ca17cfc4aef215923f71b33d54f6d"} Jan 21 10:18:49 crc kubenswrapper[4684]: I0121 10:18:49.707562 4684 generic.go:334] "Generic (PLEG): container finished" podID="0ae078e6-e1ed-4026-8fcd-b1bb935afe23" containerID="410abff2156b930bbdf39f2d616a99113ff3cd51106cf653008be09358190d98" exitCode=0 Jan 21 10:18:49 crc kubenswrapper[4684]: I0121 10:18:49.707909 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzwsr" 
event={"ID":"0ae078e6-e1ed-4026-8fcd-b1bb935afe23","Type":"ContainerDied","Data":"410abff2156b930bbdf39f2d616a99113ff3cd51106cf653008be09358190d98"} Jan 21 10:18:53 crc kubenswrapper[4684]: I0121 10:18:53.652958 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j88l9" Jan 21 10:18:53 crc kubenswrapper[4684]: I0121 10:18:53.653768 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j88l9" Jan 21 10:18:53 crc kubenswrapper[4684]: I0121 10:18:53.735615 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-njtvc" event={"ID":"319c1164-c464-40c9-a163-89cbc719fa56","Type":"ContainerStarted","Data":"f62b4fc083f6c043191c80b5d335b5967de455ccad54cd48e4698eddc94994c0"} Jan 21 10:18:53 crc kubenswrapper[4684]: I0121 10:18:53.736069 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-njtvc" Jan 21 10:18:53 crc kubenswrapper[4684]: I0121 10:18:53.736954 4684 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-njtvc container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.46:8081/healthz\": dial tcp 10.217.0.46:8081: connect: connection refused" start-of-body= Jan 21 10:18:53 crc kubenswrapper[4684]: I0121 10:18:53.736997 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-njtvc" podUID="319c1164-c464-40c9-a163-89cbc719fa56" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.46:8081/healthz\": dial tcp 10.217.0.46:8081: connect: connection refused" Jan 21 10:18:53 crc kubenswrapper[4684]: I0121 10:18:53.738451 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9gqzc" event={"ID":"f2f1b01a-1863-4f20-892e-a4ac0f808d71","Type":"ContainerStarted","Data":"5c59c8bf94a00feaf363e960bc2f8af778a5f96b9996c39843a11cff790eb00c"} Jan 21 10:18:53 crc kubenswrapper[4684]: I0121 10:18:53.742690 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzwsr" event={"ID":"0ae078e6-e1ed-4026-8fcd-b1bb935afe23","Type":"ContainerStarted","Data":"ff5b610931bdca29ceebcde6a2d7aa0e82ac33f5bdb815fc18c13eef898aee76"} Jan 21 10:18:53 crc kubenswrapper[4684]: I0121 10:18:53.744959 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4" event={"ID":"1e79586b-d501-4fa0-9c2b-d612682f0c43","Type":"ContainerStarted","Data":"d9d36cafea0827e3b937954700de27c6412ecbf4861649fc680de3278c52d8ae"} Jan 21 10:18:53 crc kubenswrapper[4684]: I0121 10:18:53.762326 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-njtvc" podStartSLOduration=1.121813929 podStartE2EDuration="18.762310273s" podCreationTimestamp="2026-01-21 10:18:35 +0000 UTC" firstStartedPulling="2026-01-21 10:18:35.743494964 +0000 UTC m=+753.501577931" lastFinishedPulling="2026-01-21 10:18:53.383991298 +0000 UTC m=+771.142074275" observedRunningTime="2026-01-21 10:18:53.762120187 +0000 UTC m=+771.520203154" watchObservedRunningTime="2026-01-21 10:18:53.762310273 +0000 UTC m=+771.520393240" Jan 21 10:18:53 crc kubenswrapper[4684]: I0121 10:18:53.828599 4684 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4" podStartSLOduration=9.066597383 podStartE2EDuration="25.828579653s" podCreationTimestamp="2026-01-21 10:18:28 +0000 UTC" firstStartedPulling="2026-01-21 10:18:30.455904734 +0000 UTC m=+748.213987691" lastFinishedPulling="2026-01-21 10:18:47.217886994 +0000 UTC m=+764.975969961" observedRunningTime="2026-01-21 10:18:53.8242281 +0000 UTC m=+771.582311067" watchObservedRunningTime="2026-01-21 10:18:53.828579653 +0000 UTC m=+771.586662620" Jan 21 10:18:53 crc kubenswrapper[4684]: I0121 10:18:53.829099 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9gqzc" podStartSLOduration=2.197286151 podStartE2EDuration="19.829091839s" podCreationTimestamp="2026-01-21 10:18:34 +0000 UTC" firstStartedPulling="2026-01-21 10:18:35.721041976 +0000 UTC m=+753.479124943" lastFinishedPulling="2026-01-21 10:18:53.352847664 +0000 UTC m=+771.110930631" observedRunningTime="2026-01-21 10:18:53.797204462 +0000 UTC m=+771.555287429" watchObservedRunningTime="2026-01-21 10:18:53.829091839 +0000 UTC m=+771.587174806" Jan 21 10:18:54 crc kubenswrapper[4684]: I0121 10:18:54.696436 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j88l9" podUID="a3c57917-2d13-4511-80fb-31950b741141" containerName="registry-server" probeResult="failure" output=< Jan 21 10:18:54 crc kubenswrapper[4684]: timeout: failed to connect service ":50051" within 1s Jan 21 10:18:54 crc kubenswrapper[4684]: > Jan 21 10:18:54 crc kubenswrapper[4684]: I0121 10:18:54.759989 4684 generic.go:334] "Generic (PLEG): container finished" podID="1e79586b-d501-4fa0-9c2b-d612682f0c43" containerID="d9d36cafea0827e3b937954700de27c6412ecbf4861649fc680de3278c52d8ae" exitCode=0 Jan 21 10:18:54 crc kubenswrapper[4684]: I0121 10:18:54.762357 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4" event={"ID":"1e79586b-d501-4fa0-9c2b-d612682f0c43","Type":"ContainerDied","Data":"d9d36cafea0827e3b937954700de27c6412ecbf4861649fc680de3278c52d8ae"} Jan 21 10:18:54 crc kubenswrapper[4684]: I0121 10:18:54.769315 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-njtvc" Jan 21 10:18:54 crc kubenswrapper[4684]: I0121 10:18:54.790328 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zzwsr" podStartSLOduration=3.924330296 podStartE2EDuration="22.790309048s" podCreationTimestamp="2026-01-21 10:18:32 +0000 UTC" firstStartedPulling="2026-01-21 10:18:34.486289434 +0000 UTC m=+752.244372401" lastFinishedPulling="2026-01-21 10:18:53.352268186 +0000 UTC m=+771.110351153" observedRunningTime="2026-01-21 10:18:53.849535706 +0000 UTC m=+771.607618673" watchObservedRunningTime="2026-01-21 10:18:54.790309048 +0000 UTC m=+772.548392015" Jan 21 10:18:55 crc kubenswrapper[4684]: I0121 10:18:55.641310 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-tgqvm" Jan 21 10:18:56 crc kubenswrapper[4684]: I0121 10:18:56.285879 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4" Jan 21 10:18:56 crc kubenswrapper[4684]: I0121 10:18:56.453293 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1e79586b-d501-4fa0-9c2b-d612682f0c43-bundle\") pod \"1e79586b-d501-4fa0-9c2b-d612682f0c43\" (UID: \"1e79586b-d501-4fa0-9c2b-d612682f0c43\") " Jan 21 10:18:56 crc kubenswrapper[4684]: I0121 10:18:56.453393 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9pwc\" (UniqueName: \"kubernetes.io/projected/1e79586b-d501-4fa0-9c2b-d612682f0c43-kube-api-access-q9pwc\") pod \"1e79586b-d501-4fa0-9c2b-d612682f0c43\" (UID: \"1e79586b-d501-4fa0-9c2b-d612682f0c43\") " Jan 21 10:18:56 crc kubenswrapper[4684]: I0121 10:18:56.453482 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1e79586b-d501-4fa0-9c2b-d612682f0c43-util\") pod \"1e79586b-d501-4fa0-9c2b-d612682f0c43\" (UID: \"1e79586b-d501-4fa0-9c2b-d612682f0c43\") " Jan 21 10:18:56 crc kubenswrapper[4684]: I0121 10:18:56.455214 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e79586b-d501-4fa0-9c2b-d612682f0c43-bundle" (OuterVolumeSpecName: "bundle") pod "1e79586b-d501-4fa0-9c2b-d612682f0c43" (UID: "1e79586b-d501-4fa0-9c2b-d612682f0c43"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:18:56 crc kubenswrapper[4684]: I0121 10:18:56.467801 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e79586b-d501-4fa0-9c2b-d612682f0c43-kube-api-access-q9pwc" (OuterVolumeSpecName: "kube-api-access-q9pwc") pod "1e79586b-d501-4fa0-9c2b-d612682f0c43" (UID: "1e79586b-d501-4fa0-9c2b-d612682f0c43"). InnerVolumeSpecName "kube-api-access-q9pwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:18:56 crc kubenswrapper[4684]: I0121 10:18:56.468137 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e79586b-d501-4fa0-9c2b-d612682f0c43-util" (OuterVolumeSpecName: "util") pod "1e79586b-d501-4fa0-9c2b-d612682f0c43" (UID: "1e79586b-d501-4fa0-9c2b-d612682f0c43"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:18:56 crc kubenswrapper[4684]: I0121 10:18:56.554720 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9pwc\" (UniqueName: \"kubernetes.io/projected/1e79586b-d501-4fa0-9c2b-d612682f0c43-kube-api-access-q9pwc\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:56 crc kubenswrapper[4684]: I0121 10:18:56.554759 4684 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1e79586b-d501-4fa0-9c2b-d612682f0c43-util\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:56 crc kubenswrapper[4684]: I0121 10:18:56.554774 4684 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1e79586b-d501-4fa0-9c2b-d612682f0c43-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:56 crc kubenswrapper[4684]: I0121 10:18:56.772529 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-bb4ffb7f7-gzmw4" event={"ID":"9a3ae8fc-094f-4ee5-8c57-c1ddbccfc06d","Type":"ContainerStarted","Data":"ff73f4c2ce5e8f56bc34c744701d6a63101c6d33eb71f0abc87809210b01d95f"} Jan 21 10:18:56 crc kubenswrapper[4684]: I0121 10:18:56.775218 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4" event={"ID":"1e79586b-d501-4fa0-9c2b-d612682f0c43","Type":"ContainerDied","Data":"018d757f25f773da8f7e5ec27b53132f65cfd99fc2eb505eb3f7d06ed69547b0"} Jan 21 10:18:56 crc kubenswrapper[4684]: I0121 10:18:56.775249 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="018d757f25f773da8f7e5ec27b53132f65cfd99fc2eb505eb3f7d06ed69547b0" Jan 21 10:18:56 crc kubenswrapper[4684]: I0121 10:18:56.775277 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4" Jan 21 10:18:57 crc kubenswrapper[4684]: I0121 10:18:57.334323 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-bb4ffb7f7-gzmw4" podStartSLOduration=9.685572323 podStartE2EDuration="18.334299974s" podCreationTimestamp="2026-01-21 10:18:39 +0000 UTC" firstStartedPulling="2026-01-21 10:18:47.667325897 +0000 UTC m=+765.425408864" lastFinishedPulling="2026-01-21 10:18:56.316053548 +0000 UTC m=+774.074136515" observedRunningTime="2026-01-21 10:18:56.792635183 +0000 UTC m=+774.550718160" watchObservedRunningTime="2026-01-21 10:18:57.334299974 +0000 UTC m=+775.092382951" Jan 21 10:18:57 crc kubenswrapper[4684]: I0121 10:18:57.879537 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 21 10:18:57 crc kubenswrapper[4684]: E0121 10:18:57.879730 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e79586b-d501-4fa0-9c2b-d612682f0c43" containerName="pull" Jan 21 10:18:57 crc kubenswrapper[4684]: I0121 10:18:57.879743 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e79586b-d501-4fa0-9c2b-d612682f0c43" containerName="pull" Jan 21 10:18:57 crc kubenswrapper[4684]: E0121 10:18:57.879757 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e79586b-d501-4fa0-9c2b-d612682f0c43" containerName="util" Jan 21 10:18:57 crc kubenswrapper[4684]: I0121 10:18:57.879762 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e79586b-d501-4fa0-9c2b-d612682f0c43" containerName="util" Jan 21 10:18:57 crc kubenswrapper[4684]: E0121 10:18:57.879774 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e79586b-d501-4fa0-9c2b-d612682f0c43" containerName="extract" Jan 21 10:18:57 crc kubenswrapper[4684]: I0121 10:18:57.879780 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e79586b-d501-4fa0-9c2b-d612682f0c43" containerName="extract" Jan 21 10:18:57 crc kubenswrapper[4684]: I0121 10:18:57.879872 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e79586b-d501-4fa0-9c2b-d612682f0c43" containerName="extract" Jan 21 10:18:57 crc kubenswrapper[4684]: I0121 10:18:57.880646 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:18:57 crc kubenswrapper[4684]: I0121 10:18:57.882755 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-unicast-hosts" Jan 21 10:18:57 crc kubenswrapper[4684]: I0121 10:18:57.889495 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-dockercfg-x8pnv" Jan 21 10:18:57 crc kubenswrapper[4684]: I0121 10:18:57.898633 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-remote-ca" Jan 21 10:18:57 crc kubenswrapper[4684]: I0121 10:18:57.904747 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-config" Jan 21 10:18:57 crc kubenswrapper[4684]: I0121 10:18:57.904759 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-xpack-file-realm" Jan 21 10:18:57 crc kubenswrapper[4684]: I0121 10:18:57.904760 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-http-certs-internal" Jan 21 10:18:57 crc kubenswrapper[4684]: I0121 10:18:57.905065 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-internal-users" Jan 21 10:18:57 crc kubenswrapper[4684]: I0121 10:18:57.905218 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-scripts" Jan 21 10:18:57 crc kubenswrapper[4684]: I0121 10:18:57.905346 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-transport-certs" Jan 21 10:18:57 crc kubenswrapper[4684]: I0121 10:18:57.906335 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.073397 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/03846613-371f-48c5-b48d-268666ac73fe-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.073458 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/03846613-371f-48c5-b48d-268666ac73fe-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.073501 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/03846613-371f-48c5-b48d-268666ac73fe-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.073582 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/03846613-371f-48c5-b48d-268666ac73fe-elastic-internal-probe-user\") pod 
\"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.073647 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/03846613-371f-48c5-b48d-268666ac73fe-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.073671 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/03846613-371f-48c5-b48d-268666ac73fe-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.073693 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/03846613-371f-48c5-b48d-268666ac73fe-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.073728 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/03846613-371f-48c5-b48d-268666ac73fe-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.073811 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/03846613-371f-48c5-b48d-268666ac73fe-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.073835 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/03846613-371f-48c5-b48d-268666ac73fe-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.073855 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/03846613-371f-48c5-b48d-268666ac73fe-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.073877 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: 
\"kubernetes.io/empty-dir/03846613-371f-48c5-b48d-268666ac73fe-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.073904 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/03846613-371f-48c5-b48d-268666ac73fe-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.074082 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/03846613-371f-48c5-b48d-268666ac73fe-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.074144 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/03846613-371f-48c5-b48d-268666ac73fe-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.175405 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/03846613-371f-48c5-b48d-268666ac73fe-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.175487 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/03846613-371f-48c5-b48d-268666ac73fe-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.175535 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/03846613-371f-48c5-b48d-268666ac73fe-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.175563 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/03846613-371f-48c5-b48d-268666ac73fe-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.175598 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/03846613-371f-48c5-b48d-268666ac73fe-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: 
\"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.175642 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/03846613-371f-48c5-b48d-268666ac73fe-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.175679 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/03846613-371f-48c5-b48d-268666ac73fe-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.175707 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/03846613-371f-48c5-b48d-268666ac73fe-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.175734 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/03846613-371f-48c5-b48d-268666ac73fe-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.175760 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/03846613-371f-48c5-b48d-268666ac73fe-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.175784 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/03846613-371f-48c5-b48d-268666ac73fe-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.175815 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/03846613-371f-48c5-b48d-268666ac73fe-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.175843 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/03846613-371f-48c5-b48d-268666ac73fe-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 
21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.175938 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/03846613-371f-48c5-b48d-268666ac73fe-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.175964 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/03846613-371f-48c5-b48d-268666ac73fe-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.176322 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/03846613-371f-48c5-b48d-268666ac73fe-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.176887 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/03846613-371f-48c5-b48d-268666ac73fe-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.176975 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/03846613-371f-48c5-b48d-268666ac73fe-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.176352 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/03846613-371f-48c5-b48d-268666ac73fe-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.177177 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/03846613-371f-48c5-b48d-268666ac73fe-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.177180 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/03846613-371f-48c5-b48d-268666ac73fe-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.177268 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/03846613-371f-48c5-b48d-268666ac73fe-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.178055 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/03846613-371f-48c5-b48d-268666ac73fe-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.181559 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/03846613-371f-48c5-b48d-268666ac73fe-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.199585 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/03846613-371f-48c5-b48d-268666ac73fe-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.200253 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/03846613-371f-48c5-b48d-268666ac73fe-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.200264 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/03846613-371f-48c5-b48d-268666ac73fe-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.201051 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/03846613-371f-48c5-b48d-268666ac73fe-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.201139 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/03846613-371f-48c5-b48d-268666ac73fe-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.211291 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/03846613-371f-48c5-b48d-268666ac73fe-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"03846613-371f-48c5-b48d-268666ac73fe\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.496993 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.692851 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Jan 21 10:18:58 crc kubenswrapper[4684]: W0121 10:18:58.698089 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03846613_371f_48c5_b48d_268666ac73fe.slice/crio-3ac35dbf7f9ec02205226d1fa10b768db368825a2d7631e3c2267f081ffabaa4 WatchSource:0}: Error finding container 3ac35dbf7f9ec02205226d1fa10b768db368825a2d7631e3c2267f081ffabaa4: Status 404 returned error can't find the container with id 3ac35dbf7f9ec02205226d1fa10b768db368825a2d7631e3c2267f081ffabaa4
Jan 21 10:18:58 crc kubenswrapper[4684]: I0121 10:18:58.788197 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"03846613-371f-48c5-b48d-268666ac73fe","Type":"ContainerStarted","Data":"3ac35dbf7f9ec02205226d1fa10b768db368825a2d7631e3c2267f081ffabaa4"}
Jan 21 10:19:03 crc kubenswrapper[4684]: I0121 10:19:03.220716 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zzwsr"
Jan 21 10:19:03 crc kubenswrapper[4684]: I0121 10:19:03.222036 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zzwsr"
Jan 21 10:19:03 crc kubenswrapper[4684]: I0121 10:19:03.278860 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zzwsr"
Jan 21 10:19:03 crc kubenswrapper[4684]: I0121 10:19:03.725085 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j88l9"
Jan 21 10:19:03 crc kubenswrapper[4684]: I0121 10:19:03.767088 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j88l9"
Jan 21 10:19:03 crc kubenswrapper[4684]: I0121 10:19:03.877121 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zzwsr"
Jan 21 10:19:06 crc kubenswrapper[4684]: I0121 10:19:06.897060 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zzwsr"]
Jan 21 10:19:06 crc kubenswrapper[4684]: I0121 10:19:06.897595 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zzwsr" podUID="0ae078e6-e1ed-4026-8fcd-b1bb935afe23" containerName="registry-server" containerID="cri-o://ff5b610931bdca29ceebcde6a2d7aa0e82ac33f5bdb815fc18c13eef898aee76" gracePeriod=2
Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.302875 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.303136 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.303181 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sff6s"
Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.303776 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"be7cd103ac0b509678b75cd5e797eb3c7c476dcd51d6bf722a679b736d58aab7"} pod="openshift-machine-config-operator/machine-config-daemon-sff6s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.303826 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" containerID="cri-o://be7cd103ac0b509678b75cd5e797eb3c7c476dcd51d6bf722a679b736d58aab7" gracePeriod=600
Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.343934 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zzwsr"
Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.495625 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jc8gl\" (UniqueName: \"kubernetes.io/projected/0ae078e6-e1ed-4026-8fcd-b1bb935afe23-kube-api-access-jc8gl\") pod \"0ae078e6-e1ed-4026-8fcd-b1bb935afe23\" (UID: \"0ae078e6-e1ed-4026-8fcd-b1bb935afe23\") "
Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.495740 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ae078e6-e1ed-4026-8fcd-b1bb935afe23-utilities\") pod \"0ae078e6-e1ed-4026-8fcd-b1bb935afe23\" (UID: \"0ae078e6-e1ed-4026-8fcd-b1bb935afe23\") "
Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.495798 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ae078e6-e1ed-4026-8fcd-b1bb935afe23-catalog-content\") pod \"0ae078e6-e1ed-4026-8fcd-b1bb935afe23\" (UID: \"0ae078e6-e1ed-4026-8fcd-b1bb935afe23\") "
Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.497256 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ae078e6-e1ed-4026-8fcd-b1bb935afe23-utilities" (OuterVolumeSpecName: "utilities") pod "0ae078e6-e1ed-4026-8fcd-b1bb935afe23" (UID: "0ae078e6-e1ed-4026-8fcd-b1bb935afe23"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.504605 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ae078e6-e1ed-4026-8fcd-b1bb935afe23-kube-api-access-jc8gl" (OuterVolumeSpecName: "kube-api-access-jc8gl") pod "0ae078e6-e1ed-4026-8fcd-b1bb935afe23" (UID: "0ae078e6-e1ed-4026-8fcd-b1bb935afe23"). InnerVolumeSpecName "kube-api-access-jc8gl". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.562854 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ae078e6-e1ed-4026-8fcd-b1bb935afe23-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ae078e6-e1ed-4026-8fcd-b1bb935afe23" (UID: "0ae078e6-e1ed-4026-8fcd-b1bb935afe23"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.597461 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ae078e6-e1ed-4026-8fcd-b1bb935afe23-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.597513 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ae078e6-e1ed-4026-8fcd-b1bb935afe23-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.597525 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jc8gl\" (UniqueName: \"kubernetes.io/projected/0ae078e6-e1ed-4026-8fcd-b1bb935afe23-kube-api-access-jc8gl\") on node \"crc\" DevicePath \"\"" Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.856207 4684 generic.go:334] "Generic (PLEG): container finished" podID="0ae078e6-e1ed-4026-8fcd-b1bb935afe23" containerID="ff5b610931bdca29ceebcde6a2d7aa0e82ac33f5bdb815fc18c13eef898aee76" exitCode=0 Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.856270 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zzwsr" Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.856272 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzwsr" event={"ID":"0ae078e6-e1ed-4026-8fcd-b1bb935afe23","Type":"ContainerDied","Data":"ff5b610931bdca29ceebcde6a2d7aa0e82ac33f5bdb815fc18c13eef898aee76"} Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.856375 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zzwsr" event={"ID":"0ae078e6-e1ed-4026-8fcd-b1bb935afe23","Type":"ContainerDied","Data":"59471cda137362146b0761f4a6413af56948e5e5598bafec52b6b0170f705501"} Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.856406 4684 scope.go:117] "RemoveContainer" containerID="ff5b610931bdca29ceebcde6a2d7aa0e82ac33f5bdb815fc18c13eef898aee76" Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.875482 4684 generic.go:334] "Generic (PLEG): container finished" podID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerID="be7cd103ac0b509678b75cd5e797eb3c7c476dcd51d6bf722a679b736d58aab7" exitCode=0 Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.875534 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" event={"ID":"55d2c484-cf10-46b5-913f-2a033a2ff5c1","Type":"ContainerDied","Data":"be7cd103ac0b509678b75cd5e797eb3c7c476dcd51d6bf722a679b736d58aab7"} Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.888081 4684 scope.go:117] "RemoveContainer" containerID="410abff2156b930bbdf39f2d616a99113ff3cd51106cf653008be09358190d98" Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.901491 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zzwsr"] Jan 21 10:19:07 crc 
kubenswrapper[4684]: I0121 10:19:07.905909 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zzwsr"] Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.909993 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j88l9"] Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.910253 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j88l9" podUID="a3c57917-2d13-4511-80fb-31950b741141" containerName="registry-server" containerID="cri-o://af76284c1df63a66d1804ed925b5ed46e5a7c3f8f44211f86c0e85320eb1c58f" gracePeriod=2 Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.921471 4684 scope.go:117] "RemoveContainer" containerID="9fee32d5816a093c92f4d4b160ce8de3b52c230baa66ba4ddc3c3e150e4fed14" Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.944250 4684 scope.go:117] "RemoveContainer" containerID="ff5b610931bdca29ceebcde6a2d7aa0e82ac33f5bdb815fc18c13eef898aee76" Jan 21 10:19:07 crc kubenswrapper[4684]: E0121 10:19:07.945154 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff5b610931bdca29ceebcde6a2d7aa0e82ac33f5bdb815fc18c13eef898aee76\": container with ID starting with ff5b610931bdca29ceebcde6a2d7aa0e82ac33f5bdb815fc18c13eef898aee76 not found: ID does not exist" containerID="ff5b610931bdca29ceebcde6a2d7aa0e82ac33f5bdb815fc18c13eef898aee76" Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.945191 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff5b610931bdca29ceebcde6a2d7aa0e82ac33f5bdb815fc18c13eef898aee76"} err="failed to get container status \"ff5b610931bdca29ceebcde6a2d7aa0e82ac33f5bdb815fc18c13eef898aee76\": rpc error: code = NotFound desc = could not find container \"ff5b610931bdca29ceebcde6a2d7aa0e82ac33f5bdb815fc18c13eef898aee76\": container with ID starting with ff5b610931bdca29ceebcde6a2d7aa0e82ac33f5bdb815fc18c13eef898aee76 not found: ID does not exist" Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.945214 4684 scope.go:117] "RemoveContainer" containerID="410abff2156b930bbdf39f2d616a99113ff3cd51106cf653008be09358190d98" Jan 21 10:19:07 crc kubenswrapper[4684]: E0121 10:19:07.946146 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"410abff2156b930bbdf39f2d616a99113ff3cd51106cf653008be09358190d98\": container with ID starting with 410abff2156b930bbdf39f2d616a99113ff3cd51106cf653008be09358190d98 not found: ID does not exist" containerID="410abff2156b930bbdf39f2d616a99113ff3cd51106cf653008be09358190d98" Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.946173 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"410abff2156b930bbdf39f2d616a99113ff3cd51106cf653008be09358190d98"} err="failed to get container status \"410abff2156b930bbdf39f2d616a99113ff3cd51106cf653008be09358190d98\": rpc error: code = NotFound desc = could not find container \"410abff2156b930bbdf39f2d616a99113ff3cd51106cf653008be09358190d98\": container with ID starting with 410abff2156b930bbdf39f2d616a99113ff3cd51106cf653008be09358190d98 not found: ID does not exist" Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.946188 4684 scope.go:117] "RemoveContainer" containerID="9fee32d5816a093c92f4d4b160ce8de3b52c230baa66ba4ddc3c3e150e4fed14" Jan 21 10:19:07 crc 
kubenswrapper[4684]: E0121 10:19:07.947142 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fee32d5816a093c92f4d4b160ce8de3b52c230baa66ba4ddc3c3e150e4fed14\": container with ID starting with 9fee32d5816a093c92f4d4b160ce8de3b52c230baa66ba4ddc3c3e150e4fed14 not found: ID does not exist" containerID="9fee32d5816a093c92f4d4b160ce8de3b52c230baa66ba4ddc3c3e150e4fed14" Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.947163 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fee32d5816a093c92f4d4b160ce8de3b52c230baa66ba4ddc3c3e150e4fed14"} err="failed to get container status \"9fee32d5816a093c92f4d4b160ce8de3b52c230baa66ba4ddc3c3e150e4fed14\": rpc error: code = NotFound desc = could not find container \"9fee32d5816a093c92f4d4b160ce8de3b52c230baa66ba4ddc3c3e150e4fed14\": container with ID starting with 9fee32d5816a093c92f4d4b160ce8de3b52c230baa66ba4ddc3c3e150e4fed14 not found: ID does not exist" Jan 21 10:19:07 crc kubenswrapper[4684]: I0121 10:19:07.947176 4684 scope.go:117] "RemoveContainer" containerID="422c1ec33f05e6485754517d18dd0a73fb82adbe8ba5ea99f708cebf3062fd45" Jan 21 10:19:08 crc kubenswrapper[4684]: I0121 10:19:08.526460 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ae078e6-e1ed-4026-8fcd-b1bb935afe23" path="/var/lib/kubelet/pods/0ae078e6-e1ed-4026-8fcd-b1bb935afe23/volumes" Jan 21 10:19:08 crc kubenswrapper[4684]: I0121 10:19:08.834710 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j88l9" Jan 21 10:19:08 crc kubenswrapper[4684]: I0121 10:19:08.884287 4684 generic.go:334] "Generic (PLEG): container finished" podID="a3c57917-2d13-4511-80fb-31950b741141" containerID="af76284c1df63a66d1804ed925b5ed46e5a7c3f8f44211f86c0e85320eb1c58f" exitCode=0 Jan 21 10:19:08 crc kubenswrapper[4684]: I0121 10:19:08.884350 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j88l9" event={"ID":"a3c57917-2d13-4511-80fb-31950b741141","Type":"ContainerDied","Data":"af76284c1df63a66d1804ed925b5ed46e5a7c3f8f44211f86c0e85320eb1c58f"} Jan 21 10:19:08 crc kubenswrapper[4684]: I0121 10:19:08.884393 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j88l9" event={"ID":"a3c57917-2d13-4511-80fb-31950b741141","Type":"ContainerDied","Data":"8a7c0a5d3f7e4437ef2a9ecd1e89cf10cfb71cec910417ae1d68a7a4f652bbb8"} Jan 21 10:19:08 crc kubenswrapper[4684]: I0121 10:19:08.884409 4684 scope.go:117] "RemoveContainer" containerID="af76284c1df63a66d1804ed925b5ed46e5a7c3f8f44211f86c0e85320eb1c58f" Jan 21 10:19:08 crc kubenswrapper[4684]: I0121 10:19:08.884425 4684 util.go:48] "No ready sandbox for pod can be found. 
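The E-level ContainerStatus/DeleteContainer lines above are noisy but benign: PLEG has already reported ContainerDied, CRI-O has garbage-collected the containers, so the kubelet's follow-up RemoveContainer finds nothing left and logs the NotFound from the runtime before moving on. The gRPC NotFound code is what makes this cleanup idempotent. A small sketch of that pattern; the helper and its remove callback are hypothetical stand-ins, not kubelet API, and only the NotFound handling is the point:

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeContainer deletes a container by ID, treating "already gone"
    // (gRPC NotFound, as in the log lines above) as success.
    func removeContainer(remove func(id string) error, id string) error {
        err := remove(id)
        if status.Code(err) == codes.NotFound {
            return nil // runtime already removed it; nothing to do
        }
        return err
    }

    func main() {
        alreadyGone := func(id string) error {
            return status.Errorf(codes.NotFound, "could not find container %q", id)
        }
        // Prints <nil>: the NotFound is swallowed, mirroring how the
        // kubelet proceeds past the errors logged above.
        fmt.Println(removeContainer(alreadyGone, "ff5b6109..."))
    }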
Jan 21 10:19:08 crc kubenswrapper[4684]: I0121 10:19:08.890547 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" event={"ID":"55d2c484-cf10-46b5-913f-2a033a2ff5c1","Type":"ContainerStarted","Data":"6f4462918383bc467ad7f03eeec652df2cace8658f28e603ce544d6d7137b741"}
Jan 21 10:19:08 crc kubenswrapper[4684]: I0121 10:19:08.902783 4684 scope.go:117] "RemoveContainer" containerID="598d7ba6ca4eecbcf69f1dc67bb63baebefa9037112aeca019068e0461a9aadd"
Jan 21 10:19:08 crc kubenswrapper[4684]: I0121 10:19:08.913811 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3c57917-2d13-4511-80fb-31950b741141-utilities\") pod \"a3c57917-2d13-4511-80fb-31950b741141\" (UID: \"a3c57917-2d13-4511-80fb-31950b741141\") "
Jan 21 10:19:08 crc kubenswrapper[4684]: I0121 10:19:08.913865 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtgq9\" (UniqueName: \"kubernetes.io/projected/a3c57917-2d13-4511-80fb-31950b741141-kube-api-access-wtgq9\") pod \"a3c57917-2d13-4511-80fb-31950b741141\" (UID: \"a3c57917-2d13-4511-80fb-31950b741141\") "
Jan 21 10:19:08 crc kubenswrapper[4684]: I0121 10:19:08.913894 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3c57917-2d13-4511-80fb-31950b741141-catalog-content\") pod \"a3c57917-2d13-4511-80fb-31950b741141\" (UID: \"a3c57917-2d13-4511-80fb-31950b741141\") "
Jan 21 10:19:08 crc kubenswrapper[4684]: I0121 10:19:08.915767 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3c57917-2d13-4511-80fb-31950b741141-utilities" (OuterVolumeSpecName: "utilities") pod "a3c57917-2d13-4511-80fb-31950b741141" (UID: "a3c57917-2d13-4511-80fb-31950b741141"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 10:19:08 crc kubenswrapper[4684]: I0121 10:19:08.920182 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3c57917-2d13-4511-80fb-31950b741141-kube-api-access-wtgq9" (OuterVolumeSpecName: "kube-api-access-wtgq9") pod "a3c57917-2d13-4511-80fb-31950b741141" (UID: "a3c57917-2d13-4511-80fb-31950b741141"). InnerVolumeSpecName "kube-api-access-wtgq9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 10:19:08 crc kubenswrapper[4684]: I0121 10:19:08.922012 4684 scope.go:117] "RemoveContainer" containerID="dc47dd1d66c532850b0faf5b18daf78c89f6551755971cb2a70c019af3e4ba60"
Jan 21 10:19:08 crc kubenswrapper[4684]: I0121 10:19:08.937855 4684 scope.go:117] "RemoveContainer" containerID="af76284c1df63a66d1804ed925b5ed46e5a7c3f8f44211f86c0e85320eb1c58f"
Jan 21 10:19:08 crc kubenswrapper[4684]: E0121 10:19:08.938243 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af76284c1df63a66d1804ed925b5ed46e5a7c3f8f44211f86c0e85320eb1c58f\": container with ID starting with af76284c1df63a66d1804ed925b5ed46e5a7c3f8f44211f86c0e85320eb1c58f not found: ID does not exist" containerID="af76284c1df63a66d1804ed925b5ed46e5a7c3f8f44211f86c0e85320eb1c58f"
Jan 21 10:19:08 crc kubenswrapper[4684]: I0121 10:19:08.938301 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af76284c1df63a66d1804ed925b5ed46e5a7c3f8f44211f86c0e85320eb1c58f"} err="failed to get container status \"af76284c1df63a66d1804ed925b5ed46e5a7c3f8f44211f86c0e85320eb1c58f\": rpc error: code = NotFound desc = could not find container \"af76284c1df63a66d1804ed925b5ed46e5a7c3f8f44211f86c0e85320eb1c58f\": container with ID starting with af76284c1df63a66d1804ed925b5ed46e5a7c3f8f44211f86c0e85320eb1c58f not found: ID does not exist"
Jan 21 10:19:08 crc kubenswrapper[4684]: I0121 10:19:08.938335 4684 scope.go:117] "RemoveContainer" containerID="598d7ba6ca4eecbcf69f1dc67bb63baebefa9037112aeca019068e0461a9aadd"
Jan 21 10:19:08 crc kubenswrapper[4684]: E0121 10:19:08.938682 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"598d7ba6ca4eecbcf69f1dc67bb63baebefa9037112aeca019068e0461a9aadd\": container with ID starting with 598d7ba6ca4eecbcf69f1dc67bb63baebefa9037112aeca019068e0461a9aadd not found: ID does not exist" containerID="598d7ba6ca4eecbcf69f1dc67bb63baebefa9037112aeca019068e0461a9aadd"
Jan 21 10:19:08 crc kubenswrapper[4684]: I0121 10:19:08.938715 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"598d7ba6ca4eecbcf69f1dc67bb63baebefa9037112aeca019068e0461a9aadd"} err="failed to get container status \"598d7ba6ca4eecbcf69f1dc67bb63baebefa9037112aeca019068e0461a9aadd\": rpc error: code = NotFound desc = could not find container \"598d7ba6ca4eecbcf69f1dc67bb63baebefa9037112aeca019068e0461a9aadd\": container with ID starting with 598d7ba6ca4eecbcf69f1dc67bb63baebefa9037112aeca019068e0461a9aadd not found: ID does not exist"
Jan 21 10:19:08 crc kubenswrapper[4684]: I0121 10:19:08.938736 4684 scope.go:117] "RemoveContainer" containerID="dc47dd1d66c532850b0faf5b18daf78c89f6551755971cb2a70c019af3e4ba60"
Jan 21 10:19:08 crc kubenswrapper[4684]: E0121 10:19:08.938996 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc47dd1d66c532850b0faf5b18daf78c89f6551755971cb2a70c019af3e4ba60\": container with ID starting with dc47dd1d66c532850b0faf5b18daf78c89f6551755971cb2a70c019af3e4ba60 not found: ID does not exist" containerID="dc47dd1d66c532850b0faf5b18daf78c89f6551755971cb2a70c019af3e4ba60"
Jan 21 10:19:08 crc kubenswrapper[4684]: I0121 10:19:08.939039 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc47dd1d66c532850b0faf5b18daf78c89f6551755971cb2a70c019af3e4ba60"} err="failed to get container status \"dc47dd1d66c532850b0faf5b18daf78c89f6551755971cb2a70c019af3e4ba60\": rpc error: code = NotFound desc = could not find container \"dc47dd1d66c532850b0faf5b18daf78c89f6551755971cb2a70c019af3e4ba60\": container with ID starting with dc47dd1d66c532850b0faf5b18daf78c89f6551755971cb2a70c019af3e4ba60 not found: ID does not exist"
Jan 21 10:19:09 crc kubenswrapper[4684]: I0121 10:19:09.021254 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3c57917-2d13-4511-80fb-31950b741141-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 10:19:09 crc kubenswrapper[4684]: I0121 10:19:09.021297 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtgq9\" (UniqueName: \"kubernetes.io/projected/a3c57917-2d13-4511-80fb-31950b741141-kube-api-access-wtgq9\") on node \"crc\" DevicePath \"\""
Jan 21 10:19:09 crc kubenswrapper[4684]: I0121 10:19:09.047602 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3c57917-2d13-4511-80fb-31950b741141-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a3c57917-2d13-4511-80fb-31950b741141" (UID: "a3c57917-2d13-4511-80fb-31950b741141"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 10:19:09 crc kubenswrapper[4684]: I0121 10:19:09.122656 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3c57917-2d13-4511-80fb-31950b741141-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 10:19:09 crc kubenswrapper[4684]: I0121 10:19:09.217609 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j88l9"]
Jan 21 10:19:09 crc kubenswrapper[4684]: I0121 10:19:09.221510 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j88l9"]
Jan 21 10:19:10 crc kubenswrapper[4684]: I0121 10:19:10.520838 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3c57917-2d13-4511-80fb-31950b741141" path="/var/lib/kubelet/pods/a3c57917-2d13-4511-80fb-31950b741141/volumes"
Jan 21 10:19:11 crc kubenswrapper[4684]: I0121 10:19:11.352908 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-r98bk"]
Jan 21 10:19:11 crc kubenswrapper[4684]: E0121 10:19:11.353253 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3c57917-2d13-4511-80fb-31950b741141" containerName="registry-server"
Jan 21 10:19:11 crc kubenswrapper[4684]: I0121 10:19:11.353281 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3c57917-2d13-4511-80fb-31950b741141" containerName="registry-server"
Jan 21 10:19:11 crc kubenswrapper[4684]: E0121 10:19:11.353309 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ae078e6-e1ed-4026-8fcd-b1bb935afe23" containerName="extract-content"
Jan 21 10:19:11 crc kubenswrapper[4684]: I0121 10:19:11.353321 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ae078e6-e1ed-4026-8fcd-b1bb935afe23" containerName="extract-content"
Jan 21 10:19:11 crc kubenswrapper[4684]: E0121 10:19:11.353342 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ae078e6-e1ed-4026-8fcd-b1bb935afe23" containerName="extract-utilities"
Jan 21 10:19:11 crc kubenswrapper[4684]: I0121 10:19:11.353356 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ae078e6-e1ed-4026-8fcd-b1bb935afe23" containerName="extract-utilities"
Jan 21 10:19:11 crc kubenswrapper[4684]: E0121 10:19:11.353400 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3c57917-2d13-4511-80fb-31950b741141" containerName="extract-utilities"
Jan 21 10:19:11 crc kubenswrapper[4684]: I0121 10:19:11.353411 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3c57917-2d13-4511-80fb-31950b741141" containerName="extract-utilities"
Jan 21 10:19:11 crc kubenswrapper[4684]: E0121 10:19:11.353431 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ae078e6-e1ed-4026-8fcd-b1bb935afe23" containerName="registry-server"
Jan 21 10:19:11 crc kubenswrapper[4684]: I0121 10:19:11.353445 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ae078e6-e1ed-4026-8fcd-b1bb935afe23" containerName="registry-server"
Jan 21 10:19:11 crc kubenswrapper[4684]: E0121 10:19:11.353460 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3c57917-2d13-4511-80fb-31950b741141" containerName="extract-content"
Jan 21 10:19:11 crc kubenswrapper[4684]: I0121 10:19:11.353474 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3c57917-2d13-4511-80fb-31950b741141" containerName="extract-content"
Jan 21 10:19:11 crc kubenswrapper[4684]: I0121 10:19:11.353660 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3c57917-2d13-4511-80fb-31950b741141" containerName="registry-server"
Jan 21 10:19:11 crc kubenswrapper[4684]: I0121 10:19:11.353711 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ae078e6-e1ed-4026-8fcd-b1bb935afe23" containerName="registry-server"
Jan 21 10:19:11 crc kubenswrapper[4684]: I0121 10:19:11.355319 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-r98bk"
Jan 21 10:19:11 crc kubenswrapper[4684]: I0121 10:19:11.357322 4684 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-pgpnk"
Jan 21 10:19:11 crc kubenswrapper[4684]: I0121 10:19:11.357774 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt"
Jan 21 10:19:11 crc kubenswrapper[4684]: I0121 10:19:11.358016 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt"
Jan 21 10:19:11 crc kubenswrapper[4684]: I0121 10:19:11.374537 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-r98bk"]
Jan 21 10:19:11 crc kubenswrapper[4684]: I0121 10:19:11.450785 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a18acbf7-08b4-4716-9e3a-3e6a036b6b19-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-r98bk\" (UID: \"a18acbf7-08b4-4716-9e3a-3e6a036b6b19\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-r98bk"
Jan 21 10:19:11 crc kubenswrapper[4684]: I0121 10:19:11.450881 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvdwm\" (UniqueName: \"kubernetes.io/projected/a18acbf7-08b4-4716-9e3a-3e6a036b6b19-kube-api-access-zvdwm\") pod \"cert-manager-operator-controller-manager-5446d6888b-r98bk\" (UID: \"a18acbf7-08b4-4716-9e3a-3e6a036b6b19\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-r98bk"
Jan 21 10:19:11 crc kubenswrapper[4684]: I0121 10:19:11.552488 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvdwm\" (UniqueName: \"kubernetes.io/projected/a18acbf7-08b4-4716-9e3a-3e6a036b6b19-kube-api-access-zvdwm\") pod \"cert-manager-operator-controller-manager-5446d6888b-r98bk\" (UID: \"a18acbf7-08b4-4716-9e3a-3e6a036b6b19\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-r98bk"
Jan 21 10:19:11 crc kubenswrapper[4684]: I0121 10:19:11.552562 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a18acbf7-08b4-4716-9e3a-3e6a036b6b19-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-r98bk\" (UID: \"a18acbf7-08b4-4716-9e3a-3e6a036b6b19\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-r98bk"
Jan 21 10:19:11 crc kubenswrapper[4684]: I0121 10:19:11.553007 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a18acbf7-08b4-4716-9e3a-3e6a036b6b19-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-r98bk\" (UID: \"a18acbf7-08b4-4716-9e3a-3e6a036b6b19\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-r98bk"
Jan 21 10:19:11 crc kubenswrapper[4684]: I0121 10:19:11.575525 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvdwm\" (UniqueName: \"kubernetes.io/projected/a18acbf7-08b4-4716-9e3a-3e6a036b6b19-kube-api-access-zvdwm\") pod \"cert-manager-operator-controller-manager-5446d6888b-r98bk\" (UID: \"a18acbf7-08b4-4716-9e3a-3e6a036b6b19\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-r98bk"
Jan 21 10:19:11 crc kubenswrapper[4684]: I0121 10:19:11.686064 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-r98bk"
Jan 21 10:19:11 crc kubenswrapper[4684]: I0121 10:19:11.907189 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-r98bk"]
Jan 21 10:19:12 crc kubenswrapper[4684]: I0121 10:19:12.925337 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-r98bk" event={"ID":"a18acbf7-08b4-4716-9e3a-3e6a036b6b19","Type":"ContainerStarted","Data":"7c7df8a6cdb2fe3c7708c79dc65981c158a10b498613a49aae1fe8f0d27eb9fe"}
Jan 21 10:19:17 crc kubenswrapper[4684]: I0121 10:19:17.952462 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"03846613-371f-48c5-b48d-268666ac73fe","Type":"ContainerStarted","Data":"494071af30e8ba3d246c3863c413a695e709465058aa4895867c73ecb86cad0d"}
Jan 21 10:19:18 crc kubenswrapper[4684]: I0121 10:19:18.136972 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Jan 21 10:19:18 crc kubenswrapper[4684]: I0121 10:19:18.171903 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Jan 21 10:19:19 crc kubenswrapper[4684]: I0121 10:19:19.965117 4684 generic.go:334] "Generic (PLEG): container finished" podID="03846613-371f-48c5-b48d-268666ac73fe" containerID="494071af30e8ba3d246c3863c413a695e709465058aa4895867c73ecb86cad0d" exitCode=0
Jan 21 10:19:19 crc kubenswrapper[4684]: I0121 10:19:19.965430 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"03846613-371f-48c5-b48d-268666ac73fe","Type":"ContainerDied","Data":"494071af30e8ba3d246c3863c413a695e709465058aa4895867c73ecb86cad0d"}
Jan 21 10:19:21 crc kubenswrapper[4684]: I0121 10:19:21.977489 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-r98bk" event={"ID":"a18acbf7-08b4-4716-9e3a-3e6a036b6b19","Type":"ContainerStarted","Data":"c60a885ca923250fb629f7d745a969e99642a03364ede4c7aa538e1ee1870d80"}
Jan 21 10:19:21 crc kubenswrapper[4684]: I0121 10:19:21.980294 4684 generic.go:334] "Generic (PLEG): container finished" podID="03846613-371f-48c5-b48d-268666ac73fe" containerID="d1665a585d54850b64f24bdd4ced162920fccd198439d654f1faa9d4cd3d1148" exitCode=0
Jan 21 10:19:21 crc kubenswrapper[4684]: I0121 10:19:21.980434 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"03846613-371f-48c5-b48d-268666ac73fe","Type":"ContainerDied","Data":"d1665a585d54850b64f24bdd4ced162920fccd198439d654f1faa9d4cd3d1148"}
Jan 21 10:19:22 crc kubenswrapper[4684]: I0121 10:19:22.000523 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-r98bk" podStartSLOduration=1.075328135 podStartE2EDuration="11.000500632s" podCreationTimestamp="2026-01-21 10:19:11 +0000 UTC" firstStartedPulling="2026-01-21 10:19:11.91765088 +0000 UTC m=+789.675733847" lastFinishedPulling="2026-01-21 10:19:21.842823377 +0000 UTC m=+799.600906344" observedRunningTime="2026-01-21 10:19:21.998546333 +0000 UTC m=+799.756629300" watchObservedRunningTime="2026-01-21 10:19:22.000500632 +0000 UTC m=+799.758583609"
Jan 21 10:19:22 crc kubenswrapper[4684]: I0121 10:19:22.988570 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"03846613-371f-48c5-b48d-268666ac73fe","Type":"ContainerStarted","Data":"b5e306864ee223b4805a51dcaa60f2b7692924cafcdf216edf3c6f2364c61e44"}
Jan 21 10:19:22 crc kubenswrapper[4684]: I0121 10:19:22.988958 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 10:19:23 crc kubenswrapper[4684]: I0121 10:19:23.043602 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=7.332057912 podStartE2EDuration="26.043578391s" podCreationTimestamp="2026-01-21 10:18:57 +0000 UTC" firstStartedPulling="2026-01-21 10:18:58.700887277 +0000 UTC m=+776.458970244" lastFinishedPulling="2026-01-21 10:19:17.412407766 +0000 UTC m=+795.170490723" observedRunningTime="2026-01-21 10:19:23.043282092 +0000 UTC m=+800.801365059" watchObservedRunningTime="2026-01-21 10:19:23.043578391 +0000 UTC m=+800.801661358"
Jan 21 10:19:25 crc kubenswrapper[4684]: I0121 10:19:25.902966 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-drtkg"]
Jan 21 10:19:25 crc kubenswrapper[4684]: I0121 10:19:25.904059 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-drtkg"
Jan 21 10:19:25 crc kubenswrapper[4684]: I0121 10:19:25.906973 4684 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-9k9tj"
Jan 21 10:19:25 crc kubenswrapper[4684]: I0121 10:19:25.909903 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Jan 21 10:19:25 crc kubenswrapper[4684]: I0121 10:19:25.910237 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Jan 21 10:19:25 crc kubenswrapper[4684]: I0121 10:19:25.918224 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-drtkg"]
Jan 21 10:19:26 crc kubenswrapper[4684]: I0121 10:19:26.082576 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4eca5441-46b0-4247-bf5a-b8981782867a-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-drtkg\" (UID: \"4eca5441-46b0-4247-bf5a-b8981782867a\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-drtkg"
Jan 21 10:19:26 crc kubenswrapper[4684]: I0121 10:19:26.082651 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jtfw\" (UniqueName: \"kubernetes.io/projected/4eca5441-46b0-4247-bf5a-b8981782867a-kube-api-access-8jtfw\") pod \"cert-manager-webhook-f4fb5df64-drtkg\" (UID: \"4eca5441-46b0-4247-bf5a-b8981782867a\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-drtkg"
Jan 21 10:19:26 crc kubenswrapper[4684]: I0121 10:19:26.184455 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4eca5441-46b0-4247-bf5a-b8981782867a-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-drtkg\" (UID: \"4eca5441-46b0-4247-bf5a-b8981782867a\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-drtkg"
\"4eca5441-46b0-4247-bf5a-b8981782867a\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-drtkg" Jan 21 10:19:26 crc kubenswrapper[4684]: I0121 10:19:26.184552 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jtfw\" (UniqueName: \"kubernetes.io/projected/4eca5441-46b0-4247-bf5a-b8981782867a-kube-api-access-8jtfw\") pod \"cert-manager-webhook-f4fb5df64-drtkg\" (UID: \"4eca5441-46b0-4247-bf5a-b8981782867a\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-drtkg" Jan 21 10:19:26 crc kubenswrapper[4684]: I0121 10:19:26.208695 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jtfw\" (UniqueName: \"kubernetes.io/projected/4eca5441-46b0-4247-bf5a-b8981782867a-kube-api-access-8jtfw\") pod \"cert-manager-webhook-f4fb5df64-drtkg\" (UID: \"4eca5441-46b0-4247-bf5a-b8981782867a\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-drtkg" Jan 21 10:19:26 crc kubenswrapper[4684]: I0121 10:19:26.211054 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4eca5441-46b0-4247-bf5a-b8981782867a-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-drtkg\" (UID: \"4eca5441-46b0-4247-bf5a-b8981782867a\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-drtkg" Jan 21 10:19:26 crc kubenswrapper[4684]: I0121 10:19:26.218895 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-drtkg" Jan 21 10:19:26 crc kubenswrapper[4684]: I0121 10:19:26.428126 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-drtkg"] Jan 21 10:19:27 crc kubenswrapper[4684]: I0121 10:19:27.009862 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-drtkg" event={"ID":"4eca5441-46b0-4247-bf5a-b8981782867a","Type":"ContainerStarted","Data":"db61f390969c9a6670ea4f7289fb321299b99d52e6eb93954e2921865fd4dc9b"} Jan 21 10:19:27 crc kubenswrapper[4684]: I0121 10:19:27.872183 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-q67bv"] Jan 21 10:19:27 crc kubenswrapper[4684]: I0121 10:19:27.873379 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-q67bv" Jan 21 10:19:27 crc kubenswrapper[4684]: I0121 10:19:27.875500 4684 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-jc46p" Jan 21 10:19:27 crc kubenswrapper[4684]: I0121 10:19:27.886563 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-q67bv"] Jan 21 10:19:27 crc kubenswrapper[4684]: I0121 10:19:27.906767 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrhkr\" (UniqueName: \"kubernetes.io/projected/0ad30b20-d9a7-411a-bd08-af23f6cef22a-kube-api-access-wrhkr\") pod \"cert-manager-cainjector-855d9ccff4-q67bv\" (UID: \"0ad30b20-d9a7-411a-bd08-af23f6cef22a\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-q67bv" Jan 21 10:19:27 crc kubenswrapper[4684]: I0121 10:19:27.906835 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0ad30b20-d9a7-411a-bd08-af23f6cef22a-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-q67bv\" (UID: \"0ad30b20-d9a7-411a-bd08-af23f6cef22a\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-q67bv" Jan 21 10:19:28 crc kubenswrapper[4684]: I0121 10:19:28.008026 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrhkr\" (UniqueName: \"kubernetes.io/projected/0ad30b20-d9a7-411a-bd08-af23f6cef22a-kube-api-access-wrhkr\") pod \"cert-manager-cainjector-855d9ccff4-q67bv\" (UID: \"0ad30b20-d9a7-411a-bd08-af23f6cef22a\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-q67bv" Jan 21 10:19:28 crc kubenswrapper[4684]: I0121 10:19:28.008124 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0ad30b20-d9a7-411a-bd08-af23f6cef22a-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-q67bv\" (UID: \"0ad30b20-d9a7-411a-bd08-af23f6cef22a\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-q67bv" Jan 21 10:19:28 crc kubenswrapper[4684]: I0121 10:19:28.030979 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0ad30b20-d9a7-411a-bd08-af23f6cef22a-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-q67bv\" (UID: \"0ad30b20-d9a7-411a-bd08-af23f6cef22a\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-q67bv" Jan 21 10:19:28 crc kubenswrapper[4684]: I0121 10:19:28.031140 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrhkr\" (UniqueName: \"kubernetes.io/projected/0ad30b20-d9a7-411a-bd08-af23f6cef22a-kube-api-access-wrhkr\") pod \"cert-manager-cainjector-855d9ccff4-q67bv\" (UID: \"0ad30b20-d9a7-411a-bd08-af23f6cef22a\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-q67bv" Jan 21 10:19:28 crc kubenswrapper[4684]: I0121 10:19:28.192302 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-q67bv" Jan 21 10:19:28 crc kubenswrapper[4684]: I0121 10:19:28.434013 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-q67bv"] Jan 21 10:19:28 crc kubenswrapper[4684]: W0121 10:19:28.438899 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ad30b20_d9a7_411a_bd08_af23f6cef22a.slice/crio-08c77041fff6633456a781849f9b4457c36a7ce1f4b176dc63c9879a4b62a497 WatchSource:0}: Error finding container 08c77041fff6633456a781849f9b4457c36a7ce1f4b176dc63c9879a4b62a497: Status 404 returned error can't find the container with id 08c77041fff6633456a781849f9b4457c36a7ce1f4b176dc63c9879a4b62a497 Jan 21 10:19:29 crc kubenswrapper[4684]: I0121 10:19:29.039602 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-q67bv" event={"ID":"0ad30b20-d9a7-411a-bd08-af23f6cef22a","Type":"ContainerStarted","Data":"08c77041fff6633456a781849f9b4457c36a7ce1f4b176dc63c9879a4b62a497"} Jan 21 10:19:30 crc kubenswrapper[4684]: I0121 10:19:30.174271 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-2zf6b"] Jan 21 10:19:30 crc kubenswrapper[4684]: I0121 10:19:30.175332 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-2zf6b" Jan 21 10:19:30 crc kubenswrapper[4684]: I0121 10:19:30.177447 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"infrawatch-operators-dockercfg-m9hkl" Jan 21 10:19:30 crc kubenswrapper[4684]: I0121 10:19:30.192550 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-2zf6b"] Jan 21 10:19:30 crc kubenswrapper[4684]: I0121 10:19:30.238837 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt5cq\" (UniqueName: \"kubernetes.io/projected/016c3d74-f8a7-45b1-bd84-1ec6e8bf7554-kube-api-access-gt5cq\") pod \"infrawatch-operators-2zf6b\" (UID: \"016c3d74-f8a7-45b1-bd84-1ec6e8bf7554\") " pod="service-telemetry/infrawatch-operators-2zf6b" Jan 21 10:19:30 crc kubenswrapper[4684]: I0121 10:19:30.340563 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt5cq\" (UniqueName: \"kubernetes.io/projected/016c3d74-f8a7-45b1-bd84-1ec6e8bf7554-kube-api-access-gt5cq\") pod \"infrawatch-operators-2zf6b\" (UID: \"016c3d74-f8a7-45b1-bd84-1ec6e8bf7554\") " pod="service-telemetry/infrawatch-operators-2zf6b" Jan 21 10:19:30 crc kubenswrapper[4684]: I0121 10:19:30.361687 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt5cq\" (UniqueName: \"kubernetes.io/projected/016c3d74-f8a7-45b1-bd84-1ec6e8bf7554-kube-api-access-gt5cq\") pod \"infrawatch-operators-2zf6b\" (UID: \"016c3d74-f8a7-45b1-bd84-1ec6e8bf7554\") " pod="service-telemetry/infrawatch-operators-2zf6b" Jan 21 10:19:30 crc kubenswrapper[4684]: I0121 10:19:30.501703 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-2zf6b" Jan 21 10:19:33 crc kubenswrapper[4684]: I0121 10:19:33.587333 4684 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="03846613-371f-48c5-b48d-268666ac73fe" containerName="elasticsearch" probeResult="failure" output=< Jan 21 10:19:33 crc kubenswrapper[4684]: {"timestamp": "2026-01-21T10:19:33+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 21 10:19:33 crc kubenswrapper[4684]: > Jan 21 10:19:34 crc kubenswrapper[4684]: I0121 10:19:34.767879 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-2zf6b"] Jan 21 10:19:35 crc kubenswrapper[4684]: I0121 10:19:35.251400 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-2zf6b"] Jan 21 10:19:35 crc kubenswrapper[4684]: W0121 10:19:35.273128 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod016c3d74_f8a7_45b1_bd84_1ec6e8bf7554.slice/crio-0a796fbfd52139b313fc0d2955831251bdb9d77646f05062f5d7726f6a002c47 WatchSource:0}: Error finding container 0a796fbfd52139b313fc0d2955831251bdb9d77646f05062f5d7726f6a002c47: Status 404 returned error can't find the container with id 0a796fbfd52139b313fc0d2955831251bdb9d77646f05062f5d7726f6a002c47 Jan 21 10:19:35 crc kubenswrapper[4684]: I0121 10:19:35.571466 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-28z9j"] Jan 21 10:19:35 crc kubenswrapper[4684]: I0121 10:19:35.572335 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-28z9j" Jan 21 10:19:35 crc kubenswrapper[4684]: I0121 10:19:35.583453 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-28z9j"] Jan 21 10:19:35 crc kubenswrapper[4684]: I0121 10:19:35.615726 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdkq5\" (UniqueName: \"kubernetes.io/projected/5dd27e48-6f78-4869-9700-39a70a006a4b-kube-api-access-zdkq5\") pod \"infrawatch-operators-28z9j\" (UID: \"5dd27e48-6f78-4869-9700-39a70a006a4b\") " pod="service-telemetry/infrawatch-operators-28z9j" Jan 21 10:19:35 crc kubenswrapper[4684]: I0121 10:19:35.717817 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdkq5\" (UniqueName: \"kubernetes.io/projected/5dd27e48-6f78-4869-9700-39a70a006a4b-kube-api-access-zdkq5\") pod \"infrawatch-operators-28z9j\" (UID: \"5dd27e48-6f78-4869-9700-39a70a006a4b\") " pod="service-telemetry/infrawatch-operators-28z9j" Jan 21 10:19:35 crc kubenswrapper[4684]: I0121 10:19:35.736706 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdkq5\" (UniqueName: \"kubernetes.io/projected/5dd27e48-6f78-4869-9700-39a70a006a4b-kube-api-access-zdkq5\") pod \"infrawatch-operators-28z9j\" (UID: \"5dd27e48-6f78-4869-9700-39a70a006a4b\") " pod="service-telemetry/infrawatch-operators-28z9j" Jan 21 10:19:35 crc kubenswrapper[4684]: I0121 10:19:35.896863 4684 util.go:30] "No sandbox for pod can be found. 
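On the elasticsearch readiness failure above: curl_rc 7 is curl's "failed to connect to host" exit code, so the exec probe ran fine but nothing was accepting connections on the Elasticsearch HTTP port yet; the pod transitions to ready about five seconds later (10:19:38.804047 below). A rough equivalent of that connect-level check in Go; host and port are illustrative assumptions, since the actual probe is the curl-based script whose JSON output appears above:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // Attempts a TCP connect the way curl does before any HTTP exchange;
    // a refused or timed-out dial corresponds to curl exit code 7.
    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:9200", 2*time.Second)
        if err != nil {
            fmt.Println("not ready:", err) // comparable to curl_rc 7
            return
        }
        conn.Close()
        fmt.Println("port is accepting connections")
    }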
Jan 21 10:19:36 crc kubenswrapper[4684]: I0121 10:19:36.126208 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-drtkg" event={"ID":"4eca5441-46b0-4247-bf5a-b8981782867a","Type":"ContainerStarted","Data":"3757ff097087e4a751dfca462a9726b0ca0506c298f608f3c70727d9265b8f7d"}
Jan 21 10:19:36 crc kubenswrapper[4684]: I0121 10:19:36.126654 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-drtkg"
Jan 21 10:19:36 crc kubenswrapper[4684]: I0121 10:19:36.129542 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-q67bv" event={"ID":"0ad30b20-d9a7-411a-bd08-af23f6cef22a","Type":"ContainerStarted","Data":"fe99dab1869e9b00212679e4a5740bd2de1556481cec7f6db024d0496f038984"}
Jan 21 10:19:36 crc kubenswrapper[4684]: I0121 10:19:36.131092 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-2zf6b" event={"ID":"016c3d74-f8a7-45b1-bd84-1ec6e8bf7554","Type":"ContainerStarted","Data":"0a796fbfd52139b313fc0d2955831251bdb9d77646f05062f5d7726f6a002c47"}
Jan 21 10:19:36 crc kubenswrapper[4684]: I0121 10:19:36.141040 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-drtkg" podStartSLOduration=2.49135801 podStartE2EDuration="11.141024292s" podCreationTimestamp="2026-01-21 10:19:25 +0000 UTC" firstStartedPulling="2026-01-21 10:19:26.442138298 +0000 UTC m=+804.200221265" lastFinishedPulling="2026-01-21 10:19:35.09180458 +0000 UTC m=+812.849887547" observedRunningTime="2026-01-21 10:19:36.137882608 +0000 UTC m=+813.895965565" watchObservedRunningTime="2026-01-21 10:19:36.141024292 +0000 UTC m=+813.899107259"
Jan 21 10:19:36 crc kubenswrapper[4684]: I0121 10:19:36.167072 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-q67bv" podStartSLOduration=2.511549563 podStartE2EDuration="9.167042773s" podCreationTimestamp="2026-01-21 10:19:27 +0000 UTC" firstStartedPulling="2026-01-21 10:19:28.441593508 +0000 UTC m=+806.199676475" lastFinishedPulling="2026-01-21 10:19:35.097086718 +0000 UTC m=+812.855169685" observedRunningTime="2026-01-21 10:19:36.163832556 +0000 UTC m=+813.921915523" watchObservedRunningTime="2026-01-21 10:19:36.167042773 +0000 UTC m=+813.925125740"
Jan 21 10:19:36 crc kubenswrapper[4684]: I0121 10:19:36.371570 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-28z9j"]
Jan 21 10:19:37 crc kubenswrapper[4684]: I0121 10:19:37.136695 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-28z9j" event={"ID":"5dd27e48-6f78-4869-9700-39a70a006a4b","Type":"ContainerStarted","Data":"1f07a91ae8ea410ee2932782ad5876deb83c26c31f0422cbb0315b336d3864e2"}
Jan 21 10:19:38 crc kubenswrapper[4684]: I0121 10:19:38.804047 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 10:19:39 crc kubenswrapper[4684]: I0121 10:19:39.151872 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-2zf6b" event={"ID":"016c3d74-f8a7-45b1-bd84-1ec6e8bf7554","Type":"ContainerStarted","Data":"33a5fe4df5726b5824b9906ad857df0eac532e1ffd856875f49da216cb8ef8f9"}
Jan 21 10:19:39 crc kubenswrapper[4684]: I0121 10:19:39.152141 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-2zf6b" podUID="016c3d74-f8a7-45b1-bd84-1ec6e8bf7554" containerName="registry-server" containerID="cri-o://33a5fe4df5726b5824b9906ad857df0eac532e1ffd856875f49da216cb8ef8f9" gracePeriod=2
Jan 21 10:19:39 crc kubenswrapper[4684]: I0121 10:19:39.155059 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-28z9j" event={"ID":"5dd27e48-6f78-4869-9700-39a70a006a4b","Type":"ContainerStarted","Data":"a0d95d827ca1ec0b23360113639ddb4d45eaaa5b7de5fa8ecf44270cbcf93cbf"}
Jan 21 10:19:39 crc kubenswrapper[4684]: I0121 10:19:39.195439 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-2zf6b" podStartSLOduration=6.369207211 podStartE2EDuration="9.195417656s" podCreationTimestamp="2026-01-21 10:19:30 +0000 UTC" firstStartedPulling="2026-01-21 10:19:35.277729662 +0000 UTC m=+813.035812629" lastFinishedPulling="2026-01-21 10:19:38.103940107 +0000 UTC m=+815.862023074" observedRunningTime="2026-01-21 10:19:39.181107077 +0000 UTC m=+816.939190054" watchObservedRunningTime="2026-01-21 10:19:39.195417656 +0000 UTC m=+816.953500623"
Jan 21 10:19:39 crc kubenswrapper[4684]: I0121 10:19:39.197947 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-28z9j" podStartSLOduration=2.910802369 podStartE2EDuration="4.197940942s" podCreationTimestamp="2026-01-21 10:19:35 +0000 UTC" firstStartedPulling="2026-01-21 10:19:36.838492644 +0000 UTC m=+814.596575601" lastFinishedPulling="2026-01-21 10:19:38.125631217 +0000 UTC m=+815.883714174" observedRunningTime="2026-01-21 10:19:39.193806538 +0000 UTC m=+816.951889525" watchObservedRunningTime="2026-01-21 10:19:39.197940942 +0000 UTC m=+816.956023899"
Jan 21 10:19:39 crc kubenswrapper[4684]: I0121 10:19:39.552960 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-2zf6b"
Jan 21 10:19:39 crc kubenswrapper[4684]: I0121 10:19:39.673397 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gt5cq\" (UniqueName: \"kubernetes.io/projected/016c3d74-f8a7-45b1-bd84-1ec6e8bf7554-kube-api-access-gt5cq\") pod \"016c3d74-f8a7-45b1-bd84-1ec6e8bf7554\" (UID: \"016c3d74-f8a7-45b1-bd84-1ec6e8bf7554\") "
Jan 21 10:19:39 crc kubenswrapper[4684]: I0121 10:19:39.681292 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/016c3d74-f8a7-45b1-bd84-1ec6e8bf7554-kube-api-access-gt5cq" (OuterVolumeSpecName: "kube-api-access-gt5cq") pod "016c3d74-f8a7-45b1-bd84-1ec6e8bf7554" (UID: "016c3d74-f8a7-45b1-bd84-1ec6e8bf7554"). InnerVolumeSpecName "kube-api-access-gt5cq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 10:19:39 crc kubenswrapper[4684]: I0121 10:19:39.775117 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gt5cq\" (UniqueName: \"kubernetes.io/projected/016c3d74-f8a7-45b1-bd84-1ec6e8bf7554-kube-api-access-gt5cq\") on node \"crc\" DevicePath \"\""
Jan 21 10:19:40 crc kubenswrapper[4684]: I0121 10:19:40.163430 4684 generic.go:334] "Generic (PLEG): container finished" podID="016c3d74-f8a7-45b1-bd84-1ec6e8bf7554" containerID="33a5fe4df5726b5824b9906ad857df0eac532e1ffd856875f49da216cb8ef8f9" exitCode=0
Jan 21 10:19:40 crc kubenswrapper[4684]: I0121 10:19:40.163501 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-2zf6b" event={"ID":"016c3d74-f8a7-45b1-bd84-1ec6e8bf7554","Type":"ContainerDied","Data":"33a5fe4df5726b5824b9906ad857df0eac532e1ffd856875f49da216cb8ef8f9"}
Jan 21 10:19:40 crc kubenswrapper[4684]: I0121 10:19:40.163828 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-2zf6b" event={"ID":"016c3d74-f8a7-45b1-bd84-1ec6e8bf7554","Type":"ContainerDied","Data":"0a796fbfd52139b313fc0d2955831251bdb9d77646f05062f5d7726f6a002c47"}
Jan 21 10:19:40 crc kubenswrapper[4684]: I0121 10:19:40.163863 4684 scope.go:117] "RemoveContainer" containerID="33a5fe4df5726b5824b9906ad857df0eac532e1ffd856875f49da216cb8ef8f9"
Jan 21 10:19:40 crc kubenswrapper[4684]: I0121 10:19:40.163512 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-2zf6b"
Jan 21 10:19:40 crc kubenswrapper[4684]: I0121 10:19:40.187219 4684 scope.go:117] "RemoveContainer" containerID="33a5fe4df5726b5824b9906ad857df0eac532e1ffd856875f49da216cb8ef8f9"
Jan 21 10:19:40 crc kubenswrapper[4684]: E0121 10:19:40.187724 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33a5fe4df5726b5824b9906ad857df0eac532e1ffd856875f49da216cb8ef8f9\": container with ID starting with 33a5fe4df5726b5824b9906ad857df0eac532e1ffd856875f49da216cb8ef8f9 not found: ID does not exist" containerID="33a5fe4df5726b5824b9906ad857df0eac532e1ffd856875f49da216cb8ef8f9"
Jan 21 10:19:40 crc kubenswrapper[4684]: I0121 10:19:40.187758 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33a5fe4df5726b5824b9906ad857df0eac532e1ffd856875f49da216cb8ef8f9"} err="failed to get container status \"33a5fe4df5726b5824b9906ad857df0eac532e1ffd856875f49da216cb8ef8f9\": rpc error: code = NotFound desc = could not find container \"33a5fe4df5726b5824b9906ad857df0eac532e1ffd856875f49da216cb8ef8f9\": container with ID starting with 33a5fe4df5726b5824b9906ad857df0eac532e1ffd856875f49da216cb8ef8f9 not found: ID does not exist"
Jan 21 10:19:40 crc kubenswrapper[4684]: I0121 10:19:40.193640 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-2zf6b"]
Jan 21 10:19:40 crc kubenswrapper[4684]: I0121 10:19:40.197739 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-2zf6b"]
Jan 21 10:19:40 crc kubenswrapper[4684]: I0121 10:19:40.523875 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="016c3d74-f8a7-45b1-bd84-1ec6e8bf7554" path="/var/lib/kubelet/pods/016c3d74-f8a7-45b1-bd84-1ec6e8bf7554/volumes"
Jan 21 10:19:41 crc kubenswrapper[4684]: I0121 10:19:41.223984 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness"
status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-drtkg" Jan 21 10:19:45 crc kubenswrapper[4684]: I0121 10:19:45.897717 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-28z9j" Jan 21 10:19:45 crc kubenswrapper[4684]: I0121 10:19:45.898221 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-28z9j" Jan 21 10:19:45 crc kubenswrapper[4684]: I0121 10:19:45.932409 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-28z9j" Jan 21 10:19:45 crc kubenswrapper[4684]: I0121 10:19:45.983009 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-5m7tk"] Jan 21 10:19:45 crc kubenswrapper[4684]: E0121 10:19:45.983590 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="016c3d74-f8a7-45b1-bd84-1ec6e8bf7554" containerName="registry-server" Jan 21 10:19:45 crc kubenswrapper[4684]: I0121 10:19:45.983612 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="016c3d74-f8a7-45b1-bd84-1ec6e8bf7554" containerName="registry-server" Jan 21 10:19:45 crc kubenswrapper[4684]: I0121 10:19:45.983746 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="016c3d74-f8a7-45b1-bd84-1ec6e8bf7554" containerName="registry-server" Jan 21 10:19:45 crc kubenswrapper[4684]: I0121 10:19:45.984210 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-5m7tk" Jan 21 10:19:45 crc kubenswrapper[4684]: I0121 10:19:45.987403 4684 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-c5m79" Jan 21 10:19:45 crc kubenswrapper[4684]: I0121 10:19:45.995052 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-5m7tk"] Jan 21 10:19:46 crc kubenswrapper[4684]: I0121 10:19:46.165945 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxh7n\" (UniqueName: \"kubernetes.io/projected/b06f9741-edea-445c-91e9-74f4a0c414b8-kube-api-access-lxh7n\") pod \"cert-manager-86cb77c54b-5m7tk\" (UID: \"b06f9741-edea-445c-91e9-74f4a0c414b8\") " pod="cert-manager/cert-manager-86cb77c54b-5m7tk" Jan 21 10:19:46 crc kubenswrapper[4684]: I0121 10:19:46.166083 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b06f9741-edea-445c-91e9-74f4a0c414b8-bound-sa-token\") pod \"cert-manager-86cb77c54b-5m7tk\" (UID: \"b06f9741-edea-445c-91e9-74f4a0c414b8\") " pod="cert-manager/cert-manager-86cb77c54b-5m7tk" Jan 21 10:19:46 crc kubenswrapper[4684]: I0121 10:19:46.233772 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-28z9j" Jan 21 10:19:46 crc kubenswrapper[4684]: I0121 10:19:46.267542 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxh7n\" (UniqueName: \"kubernetes.io/projected/b06f9741-edea-445c-91e9-74f4a0c414b8-kube-api-access-lxh7n\") pod \"cert-manager-86cb77c54b-5m7tk\" (UID: \"b06f9741-edea-445c-91e9-74f4a0c414b8\") " pod="cert-manager/cert-manager-86cb77c54b-5m7tk" Jan 21 10:19:46 crc kubenswrapper[4684]: I0121 10:19:46.267614 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/b06f9741-edea-445c-91e9-74f4a0c414b8-bound-sa-token\") pod \"cert-manager-86cb77c54b-5m7tk\" (UID: \"b06f9741-edea-445c-91e9-74f4a0c414b8\") " pod="cert-manager/cert-manager-86cb77c54b-5m7tk" Jan 21 10:19:46 crc kubenswrapper[4684]: I0121 10:19:46.287033 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b06f9741-edea-445c-91e9-74f4a0c414b8-bound-sa-token\") pod \"cert-manager-86cb77c54b-5m7tk\" (UID: \"b06f9741-edea-445c-91e9-74f4a0c414b8\") " pod="cert-manager/cert-manager-86cb77c54b-5m7tk" Jan 21 10:19:46 crc kubenswrapper[4684]: I0121 10:19:46.294180 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxh7n\" (UniqueName: \"kubernetes.io/projected/b06f9741-edea-445c-91e9-74f4a0c414b8-kube-api-access-lxh7n\") pod \"cert-manager-86cb77c54b-5m7tk\" (UID: \"b06f9741-edea-445c-91e9-74f4a0c414b8\") " pod="cert-manager/cert-manager-86cb77c54b-5m7tk" Jan 21 10:19:46 crc kubenswrapper[4684]: I0121 10:19:46.313034 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-5m7tk" Jan 21 10:19:46 crc kubenswrapper[4684]: I0121 10:19:46.719238 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-5m7tk"] Jan 21 10:19:47 crc kubenswrapper[4684]: I0121 10:19:47.216585 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-5m7tk" event={"ID":"b06f9741-edea-445c-91e9-74f4a0c414b8","Type":"ContainerStarted","Data":"30a73687f623d459971b21f1562f60194e1fd35519ddc2789b835dc2073d6aee"} Jan 21 10:19:49 crc kubenswrapper[4684]: I0121 10:19:49.227075 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-5m7tk" event={"ID":"b06f9741-edea-445c-91e9-74f4a0c414b8","Type":"ContainerStarted","Data":"a4ca4422daf65c87c9192563e41785d3c5a015c63a80f456990000d74efda508"} Jan 21 10:19:49 crc kubenswrapper[4684]: I0121 10:19:49.246388 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-5m7tk" podStartSLOduration=4.246350151 podStartE2EDuration="4.246350151s" podCreationTimestamp="2026-01-21 10:19:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:19:49.242499445 +0000 UTC m=+827.000582422" watchObservedRunningTime="2026-01-21 10:19:49.246350151 +0000 UTC m=+827.004433118" Jan 21 10:19:50 crc kubenswrapper[4684]: I0121 10:19:50.425512 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/e2ca482ba9d5013d52294c83a5aba8d96bfcab343aa05dec9349fd22154nlsm"] Jan 21 10:19:50 crc kubenswrapper[4684]: I0121 10:19:50.427353 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/e2ca482ba9d5013d52294c83a5aba8d96bfcab343aa05dec9349fd22154nlsm" Jan 21 10:19:50 crc kubenswrapper[4684]: I0121 10:19:50.438490 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/e2ca482ba9d5013d52294c83a5aba8d96bfcab343aa05dec9349fd22154nlsm"] Jan 21 10:19:50 crc kubenswrapper[4684]: I0121 10:19:50.524838 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7zg7\" (UniqueName: \"kubernetes.io/projected/9c60c1de-970b-47e8-8a22-802ae60cd8ba-kube-api-access-d7zg7\") pod \"e2ca482ba9d5013d52294c83a5aba8d96bfcab343aa05dec9349fd22154nlsm\" (UID: \"9c60c1de-970b-47e8-8a22-802ae60cd8ba\") " pod="service-telemetry/e2ca482ba9d5013d52294c83a5aba8d96bfcab343aa05dec9349fd22154nlsm" Jan 21 10:19:50 crc kubenswrapper[4684]: I0121 10:19:50.524903 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9c60c1de-970b-47e8-8a22-802ae60cd8ba-bundle\") pod \"e2ca482ba9d5013d52294c83a5aba8d96bfcab343aa05dec9349fd22154nlsm\" (UID: \"9c60c1de-970b-47e8-8a22-802ae60cd8ba\") " pod="service-telemetry/e2ca482ba9d5013d52294c83a5aba8d96bfcab343aa05dec9349fd22154nlsm" Jan 21 10:19:50 crc kubenswrapper[4684]: I0121 10:19:50.524976 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9c60c1de-970b-47e8-8a22-802ae60cd8ba-util\") pod \"e2ca482ba9d5013d52294c83a5aba8d96bfcab343aa05dec9349fd22154nlsm\" (UID: \"9c60c1de-970b-47e8-8a22-802ae60cd8ba\") " pod="service-telemetry/e2ca482ba9d5013d52294c83a5aba8d96bfcab343aa05dec9349fd22154nlsm" Jan 21 10:19:50 crc kubenswrapper[4684]: I0121 10:19:50.626608 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9c60c1de-970b-47e8-8a22-802ae60cd8ba-util\") pod \"e2ca482ba9d5013d52294c83a5aba8d96bfcab343aa05dec9349fd22154nlsm\" (UID: \"9c60c1de-970b-47e8-8a22-802ae60cd8ba\") " pod="service-telemetry/e2ca482ba9d5013d52294c83a5aba8d96bfcab343aa05dec9349fd22154nlsm" Jan 21 10:19:50 crc kubenswrapper[4684]: I0121 10:19:50.626943 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7zg7\" (UniqueName: \"kubernetes.io/projected/9c60c1de-970b-47e8-8a22-802ae60cd8ba-kube-api-access-d7zg7\") pod \"e2ca482ba9d5013d52294c83a5aba8d96bfcab343aa05dec9349fd22154nlsm\" (UID: \"9c60c1de-970b-47e8-8a22-802ae60cd8ba\") " pod="service-telemetry/e2ca482ba9d5013d52294c83a5aba8d96bfcab343aa05dec9349fd22154nlsm" Jan 21 10:19:50 crc kubenswrapper[4684]: I0121 10:19:50.627009 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9c60c1de-970b-47e8-8a22-802ae60cd8ba-bundle\") pod \"e2ca482ba9d5013d52294c83a5aba8d96bfcab343aa05dec9349fd22154nlsm\" (UID: \"9c60c1de-970b-47e8-8a22-802ae60cd8ba\") " pod="service-telemetry/e2ca482ba9d5013d52294c83a5aba8d96bfcab343aa05dec9349fd22154nlsm" Jan 21 10:19:50 crc kubenswrapper[4684]: I0121 10:19:50.627227 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9c60c1de-970b-47e8-8a22-802ae60cd8ba-util\") pod \"e2ca482ba9d5013d52294c83a5aba8d96bfcab343aa05dec9349fd22154nlsm\" (UID: \"9c60c1de-970b-47e8-8a22-802ae60cd8ba\") " 
pod="service-telemetry/e2ca482ba9d5013d52294c83a5aba8d96bfcab343aa05dec9349fd22154nlsm" Jan 21 10:19:50 crc kubenswrapper[4684]: I0121 10:19:50.627577 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9c60c1de-970b-47e8-8a22-802ae60cd8ba-bundle\") pod \"e2ca482ba9d5013d52294c83a5aba8d96bfcab343aa05dec9349fd22154nlsm\" (UID: \"9c60c1de-970b-47e8-8a22-802ae60cd8ba\") " pod="service-telemetry/e2ca482ba9d5013d52294c83a5aba8d96bfcab343aa05dec9349fd22154nlsm" Jan 21 10:19:50 crc kubenswrapper[4684]: I0121 10:19:50.645737 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7zg7\" (UniqueName: \"kubernetes.io/projected/9c60c1de-970b-47e8-8a22-802ae60cd8ba-kube-api-access-d7zg7\") pod \"e2ca482ba9d5013d52294c83a5aba8d96bfcab343aa05dec9349fd22154nlsm\" (UID: \"9c60c1de-970b-47e8-8a22-802ae60cd8ba\") " pod="service-telemetry/e2ca482ba9d5013d52294c83a5aba8d96bfcab343aa05dec9349fd22154nlsm" Jan 21 10:19:50 crc kubenswrapper[4684]: I0121 10:19:50.756907 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/e2ca482ba9d5013d52294c83a5aba8d96bfcab343aa05dec9349fd22154nlsm" Jan 21 10:19:51 crc kubenswrapper[4684]: I0121 10:19:51.189332 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/e2ca482ba9d5013d52294c83a5aba8d96bfcab343aa05dec9349fd22154nlsm"] Jan 21 10:19:51 crc kubenswrapper[4684]: W0121 10:19:51.198326 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c60c1de_970b_47e8_8a22_802ae60cd8ba.slice/crio-f74ecc131b51a6d105bbc7b4484daea9e4d272bf57637214607be77469aaff6a WatchSource:0}: Error finding container f74ecc131b51a6d105bbc7b4484daea9e4d272bf57637214607be77469aaff6a: Status 404 returned error can't find the container with id f74ecc131b51a6d105bbc7b4484daea9e4d272bf57637214607be77469aaff6a Jan 21 10:19:51 crc kubenswrapper[4684]: I0121 10:19:51.222259 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsv7br"] Jan 21 10:19:51 crc kubenswrapper[4684]: I0121 10:19:51.224212 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsv7br" Jan 21 10:19:51 crc kubenswrapper[4684]: I0121 10:19:51.227228 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 10:19:51 crc kubenswrapper[4684]: I0121 10:19:51.248353 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsv7br"] Jan 21 10:19:51 crc kubenswrapper[4684]: I0121 10:19:51.249225 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/e2ca482ba9d5013d52294c83a5aba8d96bfcab343aa05dec9349fd22154nlsm" event={"ID":"9c60c1de-970b-47e8-8a22-802ae60cd8ba","Type":"ContainerStarted","Data":"f74ecc131b51a6d105bbc7b4484daea9e4d272bf57637214607be77469aaff6a"} Jan 21 10:19:51 crc kubenswrapper[4684]: I0121 10:19:51.339141 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7907315a-0307-4509-97b4-160bf055fac8-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsv7br\" (UID: \"7907315a-0307-4509-97b4-160bf055fac8\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsv7br" Jan 21 10:19:51 crc kubenswrapper[4684]: I0121 10:19:51.339181 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7907315a-0307-4509-97b4-160bf055fac8-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsv7br\" (UID: \"7907315a-0307-4509-97b4-160bf055fac8\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsv7br" Jan 21 10:19:51 crc kubenswrapper[4684]: I0121 10:19:51.339339 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89sw6\" (UniqueName: \"kubernetes.io/projected/7907315a-0307-4509-97b4-160bf055fac8-kube-api-access-89sw6\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsv7br\" (UID: \"7907315a-0307-4509-97b4-160bf055fac8\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsv7br" Jan 21 10:19:51 crc kubenswrapper[4684]: I0121 10:19:51.440430 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7907315a-0307-4509-97b4-160bf055fac8-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsv7br\" (UID: \"7907315a-0307-4509-97b4-160bf055fac8\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsv7br" Jan 21 10:19:51 crc kubenswrapper[4684]: I0121 10:19:51.440484 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7907315a-0307-4509-97b4-160bf055fac8-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsv7br\" (UID: \"7907315a-0307-4509-97b4-160bf055fac8\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsv7br" Jan 21 10:19:51 crc kubenswrapper[4684]: I0121 10:19:51.440544 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89sw6\" (UniqueName: \"kubernetes.io/projected/7907315a-0307-4509-97b4-160bf055fac8-kube-api-access-89sw6\") pod 
\"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsv7br\" (UID: \"7907315a-0307-4509-97b4-160bf055fac8\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsv7br" Jan 21 10:19:51 crc kubenswrapper[4684]: I0121 10:19:51.441007 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7907315a-0307-4509-97b4-160bf055fac8-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsv7br\" (UID: \"7907315a-0307-4509-97b4-160bf055fac8\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsv7br" Jan 21 10:19:51 crc kubenswrapper[4684]: I0121 10:19:51.441156 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7907315a-0307-4509-97b4-160bf055fac8-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsv7br\" (UID: \"7907315a-0307-4509-97b4-160bf055fac8\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsv7br" Jan 21 10:19:51 crc kubenswrapper[4684]: I0121 10:19:51.458323 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89sw6\" (UniqueName: \"kubernetes.io/projected/7907315a-0307-4509-97b4-160bf055fac8-kube-api-access-89sw6\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsv7br\" (UID: \"7907315a-0307-4509-97b4-160bf055fac8\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsv7br" Jan 21 10:19:51 crc kubenswrapper[4684]: I0121 10:19:51.546200 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsv7br" Jan 21 10:19:51 crc kubenswrapper[4684]: I0121 10:19:51.743845 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsv7br"] Jan 21 10:19:52 crc kubenswrapper[4684]: I0121 10:19:52.221215 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/2ed151d2bd3e125f2453293dd520598a3bcb66264bbe0a9abec30f3a5evzwcp"] Jan 21 10:19:52 crc kubenswrapper[4684]: I0121 10:19:52.222818 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/2ed151d2bd3e125f2453293dd520598a3bcb66264bbe0a9abec30f3a5evzwcp" Jan 21 10:19:52 crc kubenswrapper[4684]: I0121 10:19:52.256789 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsv7br" event={"ID":"7907315a-0307-4509-97b4-160bf055fac8","Type":"ContainerStarted","Data":"a3e4e1402aa6d9f89998d21a6257779a2cd76e22bc45f95ce8ec3b00f5cab899"} Jan 21 10:19:52 crc kubenswrapper[4684]: I0121 10:19:52.286007 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/2ed151d2bd3e125f2453293dd520598a3bcb66264bbe0a9abec30f3a5evzwcp"] Jan 21 10:19:52 crc kubenswrapper[4684]: I0121 10:19:52.351214 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2c66d4da-7bde-43d6-af8d-957368c8ce4f-bundle\") pod \"2ed151d2bd3e125f2453293dd520598a3bcb66264bbe0a9abec30f3a5evzwcp\" (UID: \"2c66d4da-7bde-43d6-af8d-957368c8ce4f\") " pod="service-telemetry/2ed151d2bd3e125f2453293dd520598a3bcb66264bbe0a9abec30f3a5evzwcp" Jan 21 10:19:52 crc kubenswrapper[4684]: I0121 10:19:52.351294 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j9f2\" (UniqueName: \"kubernetes.io/projected/2c66d4da-7bde-43d6-af8d-957368c8ce4f-kube-api-access-8j9f2\") pod \"2ed151d2bd3e125f2453293dd520598a3bcb66264bbe0a9abec30f3a5evzwcp\" (UID: \"2c66d4da-7bde-43d6-af8d-957368c8ce4f\") " pod="service-telemetry/2ed151d2bd3e125f2453293dd520598a3bcb66264bbe0a9abec30f3a5evzwcp" Jan 21 10:19:52 crc kubenswrapper[4684]: I0121 10:19:52.351325 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2c66d4da-7bde-43d6-af8d-957368c8ce4f-util\") pod \"2ed151d2bd3e125f2453293dd520598a3bcb66264bbe0a9abec30f3a5evzwcp\" (UID: \"2c66d4da-7bde-43d6-af8d-957368c8ce4f\") " pod="service-telemetry/2ed151d2bd3e125f2453293dd520598a3bcb66264bbe0a9abec30f3a5evzwcp" Jan 21 10:19:52 crc kubenswrapper[4684]: I0121 10:19:52.453311 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2c66d4da-7bde-43d6-af8d-957368c8ce4f-bundle\") pod \"2ed151d2bd3e125f2453293dd520598a3bcb66264bbe0a9abec30f3a5evzwcp\" (UID: \"2c66d4da-7bde-43d6-af8d-957368c8ce4f\") " pod="service-telemetry/2ed151d2bd3e125f2453293dd520598a3bcb66264bbe0a9abec30f3a5evzwcp" Jan 21 10:19:52 crc kubenswrapper[4684]: I0121 10:19:52.453415 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j9f2\" (UniqueName: \"kubernetes.io/projected/2c66d4da-7bde-43d6-af8d-957368c8ce4f-kube-api-access-8j9f2\") pod \"2ed151d2bd3e125f2453293dd520598a3bcb66264bbe0a9abec30f3a5evzwcp\" (UID: \"2c66d4da-7bde-43d6-af8d-957368c8ce4f\") " pod="service-telemetry/2ed151d2bd3e125f2453293dd520598a3bcb66264bbe0a9abec30f3a5evzwcp" Jan 21 10:19:52 crc kubenswrapper[4684]: I0121 10:19:52.453475 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2c66d4da-7bde-43d6-af8d-957368c8ce4f-util\") pod \"2ed151d2bd3e125f2453293dd520598a3bcb66264bbe0a9abec30f3a5evzwcp\" (UID: \"2c66d4da-7bde-43d6-af8d-957368c8ce4f\") " pod="service-telemetry/2ed151d2bd3e125f2453293dd520598a3bcb66264bbe0a9abec30f3a5evzwcp" Jan 21 10:19:52 crc kubenswrapper[4684]: I0121 
10:19:52.454269 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2c66d4da-7bde-43d6-af8d-957368c8ce4f-util\") pod \"2ed151d2bd3e125f2453293dd520598a3bcb66264bbe0a9abec30f3a5evzwcp\" (UID: \"2c66d4da-7bde-43d6-af8d-957368c8ce4f\") " pod="service-telemetry/2ed151d2bd3e125f2453293dd520598a3bcb66264bbe0a9abec30f3a5evzwcp" Jan 21 10:19:52 crc kubenswrapper[4684]: I0121 10:19:52.454271 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2c66d4da-7bde-43d6-af8d-957368c8ce4f-bundle\") pod \"2ed151d2bd3e125f2453293dd520598a3bcb66264bbe0a9abec30f3a5evzwcp\" (UID: \"2c66d4da-7bde-43d6-af8d-957368c8ce4f\") " pod="service-telemetry/2ed151d2bd3e125f2453293dd520598a3bcb66264bbe0a9abec30f3a5evzwcp" Jan 21 10:19:52 crc kubenswrapper[4684]: I0121 10:19:52.474822 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j9f2\" (UniqueName: \"kubernetes.io/projected/2c66d4da-7bde-43d6-af8d-957368c8ce4f-kube-api-access-8j9f2\") pod \"2ed151d2bd3e125f2453293dd520598a3bcb66264bbe0a9abec30f3a5evzwcp\" (UID: \"2c66d4da-7bde-43d6-af8d-957368c8ce4f\") " pod="service-telemetry/2ed151d2bd3e125f2453293dd520598a3bcb66264bbe0a9abec30f3a5evzwcp" Jan 21 10:19:52 crc kubenswrapper[4684]: I0121 10:19:52.550239 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/2ed151d2bd3e125f2453293dd520598a3bcb66264bbe0a9abec30f3a5evzwcp" Jan 21 10:19:52 crc kubenswrapper[4684]: I0121 10:19:52.828108 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/2ed151d2bd3e125f2453293dd520598a3bcb66264bbe0a9abec30f3a5evzwcp"] Jan 21 10:19:53 crc kubenswrapper[4684]: I0121 10:19:53.268451 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/2ed151d2bd3e125f2453293dd520598a3bcb66264bbe0a9abec30f3a5evzwcp" event={"ID":"2c66d4da-7bde-43d6-af8d-957368c8ce4f","Type":"ContainerStarted","Data":"6459d9610ed88cb5d72b5c9c8d5fd639ff3eaecd59591c32e495179a3e35c413"} Jan 21 10:19:57 crc kubenswrapper[4684]: I0121 10:19:57.301863 4684 generic.go:334] "Generic (PLEG): container finished" podID="2c66d4da-7bde-43d6-af8d-957368c8ce4f" containerID="742d4e6cc19b67698a66bf4f9345c369f20978d3effefd597261ce070bc8a18b" exitCode=0 Jan 21 10:19:57 crc kubenswrapper[4684]: I0121 10:19:57.301976 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/2ed151d2bd3e125f2453293dd520598a3bcb66264bbe0a9abec30f3a5evzwcp" event={"ID":"2c66d4da-7bde-43d6-af8d-957368c8ce4f","Type":"ContainerDied","Data":"742d4e6cc19b67698a66bf4f9345c369f20978d3effefd597261ce070bc8a18b"} Jan 21 10:19:57 crc kubenswrapper[4684]: I0121 10:19:57.305884 4684 generic.go:334] "Generic (PLEG): container finished" podID="7907315a-0307-4509-97b4-160bf055fac8" containerID="4d1b6e602b2f74bc0dcc98cf6add27e321e11e36dfb353eb0750c28a70083379" exitCode=0 Jan 21 10:19:57 crc kubenswrapper[4684]: I0121 10:19:57.305964 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsv7br" event={"ID":"7907315a-0307-4509-97b4-160bf055fac8","Type":"ContainerDied","Data":"4d1b6e602b2f74bc0dcc98cf6add27e321e11e36dfb353eb0750c28a70083379"} Jan 21 10:19:57 crc kubenswrapper[4684]: I0121 10:19:57.308880 4684 generic.go:334] "Generic (PLEG): container finished" podID="9c60c1de-970b-47e8-8a22-802ae60cd8ba" 
containerID="2119cad37f3866977e83791f86b5068e5de492136af402e1180995c5c15771e4" exitCode=0 Jan 21 10:19:57 crc kubenswrapper[4684]: I0121 10:19:57.309334 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/e2ca482ba9d5013d52294c83a5aba8d96bfcab343aa05dec9349fd22154nlsm" event={"ID":"9c60c1de-970b-47e8-8a22-802ae60cd8ba","Type":"ContainerDied","Data":"2119cad37f3866977e83791f86b5068e5de492136af402e1180995c5c15771e4"} Jan 21 10:19:59 crc kubenswrapper[4684]: I0121 10:19:59.324355 4684 generic.go:334] "Generic (PLEG): container finished" podID="9c60c1de-970b-47e8-8a22-802ae60cd8ba" containerID="3c299d33aa58f2e61313fa112cc0a57d87232e2d5947c0c304e79fc72759946f" exitCode=0 Jan 21 10:19:59 crc kubenswrapper[4684]: I0121 10:19:59.324474 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/e2ca482ba9d5013d52294c83a5aba8d96bfcab343aa05dec9349fd22154nlsm" event={"ID":"9c60c1de-970b-47e8-8a22-802ae60cd8ba","Type":"ContainerDied","Data":"3c299d33aa58f2e61313fa112cc0a57d87232e2d5947c0c304e79fc72759946f"} Jan 21 10:19:59 crc kubenswrapper[4684]: I0121 10:19:59.331085 4684 generic.go:334] "Generic (PLEG): container finished" podID="2c66d4da-7bde-43d6-af8d-957368c8ce4f" containerID="fd3b56313d94c28d2e0d0af2409c376687e6d490daf3fe1ed9fdf25f28ec4047" exitCode=0 Jan 21 10:19:59 crc kubenswrapper[4684]: I0121 10:19:59.331210 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/2ed151d2bd3e125f2453293dd520598a3bcb66264bbe0a9abec30f3a5evzwcp" event={"ID":"2c66d4da-7bde-43d6-af8d-957368c8ce4f","Type":"ContainerDied","Data":"fd3b56313d94c28d2e0d0af2409c376687e6d490daf3fe1ed9fdf25f28ec4047"} Jan 21 10:19:59 crc kubenswrapper[4684]: I0121 10:19:59.335566 4684 generic.go:334] "Generic (PLEG): container finished" podID="7907315a-0307-4509-97b4-160bf055fac8" containerID="085e1838a7f37c675222dfe601ee00862a41a2edc72d9d8d18f4484cc144e4f3" exitCode=0 Jan 21 10:19:59 crc kubenswrapper[4684]: I0121 10:19:59.335610 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsv7br" event={"ID":"7907315a-0307-4509-97b4-160bf055fac8","Type":"ContainerDied","Data":"085e1838a7f37c675222dfe601ee00862a41a2edc72d9d8d18f4484cc144e4f3"} Jan 21 10:20:00 crc kubenswrapper[4684]: I0121 10:20:00.348398 4684 generic.go:334] "Generic (PLEG): container finished" podID="7907315a-0307-4509-97b4-160bf055fac8" containerID="896b41391dc8a341802245e7d334298c0a771103a3beb4950cec9abb5b1b6169" exitCode=0 Jan 21 10:20:00 crc kubenswrapper[4684]: I0121 10:20:00.348467 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsv7br" event={"ID":"7907315a-0307-4509-97b4-160bf055fac8","Type":"ContainerDied","Data":"896b41391dc8a341802245e7d334298c0a771103a3beb4950cec9abb5b1b6169"} Jan 21 10:20:00 crc kubenswrapper[4684]: I0121 10:20:00.352605 4684 generic.go:334] "Generic (PLEG): container finished" podID="9c60c1de-970b-47e8-8a22-802ae60cd8ba" containerID="7b241b70a589cbfb0748960bfb7afefb535bd6e26e248a48f113027f5a8ad8dc" exitCode=0 Jan 21 10:20:00 crc kubenswrapper[4684]: I0121 10:20:00.352696 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/e2ca482ba9d5013d52294c83a5aba8d96bfcab343aa05dec9349fd22154nlsm" event={"ID":"9c60c1de-970b-47e8-8a22-802ae60cd8ba","Type":"ContainerDied","Data":"7b241b70a589cbfb0748960bfb7afefb535bd6e26e248a48f113027f5a8ad8dc"} Jan 21 10:20:00 crc kubenswrapper[4684]: I0121 
10:20:00.355820 4684 generic.go:334] "Generic (PLEG): container finished" podID="2c66d4da-7bde-43d6-af8d-957368c8ce4f" containerID="74503a2f058dbbcff160b9dc6f66852616289703b2ed517c842e56f959c39ddf" exitCode=0 Jan 21 10:20:00 crc kubenswrapper[4684]: I0121 10:20:00.355862 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/2ed151d2bd3e125f2453293dd520598a3bcb66264bbe0a9abec30f3a5evzwcp" event={"ID":"2c66d4da-7bde-43d6-af8d-957368c8ce4f","Type":"ContainerDied","Data":"74503a2f058dbbcff160b9dc6f66852616289703b2ed517c842e56f959c39ddf"} Jan 21 10:20:01 crc kubenswrapper[4684]: I0121 10:20:01.688840 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/e2ca482ba9d5013d52294c83a5aba8d96bfcab343aa05dec9349fd22154nlsm" Jan 21 10:20:01 crc kubenswrapper[4684]: I0121 10:20:01.694790 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/2ed151d2bd3e125f2453293dd520598a3bcb66264bbe0a9abec30f3a5evzwcp" Jan 21 10:20:01 crc kubenswrapper[4684]: I0121 10:20:01.697960 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsv7br" Jan 21 10:20:01 crc kubenswrapper[4684]: I0121 10:20:01.790736 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7907315a-0307-4509-97b4-160bf055fac8-util\") pod \"7907315a-0307-4509-97b4-160bf055fac8\" (UID: \"7907315a-0307-4509-97b4-160bf055fac8\") " Jan 21 10:20:01 crc kubenswrapper[4684]: I0121 10:20:01.790823 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9c60c1de-970b-47e8-8a22-802ae60cd8ba-util\") pod \"9c60c1de-970b-47e8-8a22-802ae60cd8ba\" (UID: \"9c60c1de-970b-47e8-8a22-802ae60cd8ba\") " Jan 21 10:20:01 crc kubenswrapper[4684]: I0121 10:20:01.790858 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2c66d4da-7bde-43d6-af8d-957368c8ce4f-bundle\") pod \"2c66d4da-7bde-43d6-af8d-957368c8ce4f\" (UID: \"2c66d4da-7bde-43d6-af8d-957368c8ce4f\") " Jan 21 10:20:01 crc kubenswrapper[4684]: I0121 10:20:01.790898 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2c66d4da-7bde-43d6-af8d-957368c8ce4f-util\") pod \"2c66d4da-7bde-43d6-af8d-957368c8ce4f\" (UID: \"2c66d4da-7bde-43d6-af8d-957368c8ce4f\") " Jan 21 10:20:01 crc kubenswrapper[4684]: I0121 10:20:01.790923 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89sw6\" (UniqueName: \"kubernetes.io/projected/7907315a-0307-4509-97b4-160bf055fac8-kube-api-access-89sw6\") pod \"7907315a-0307-4509-97b4-160bf055fac8\" (UID: \"7907315a-0307-4509-97b4-160bf055fac8\") " Jan 21 10:20:01 crc kubenswrapper[4684]: I0121 10:20:01.790968 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9c60c1de-970b-47e8-8a22-802ae60cd8ba-bundle\") pod \"9c60c1de-970b-47e8-8a22-802ae60cd8ba\" (UID: \"9c60c1de-970b-47e8-8a22-802ae60cd8ba\") " Jan 21 10:20:01 crc kubenswrapper[4684]: I0121 10:20:01.791002 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7zg7\" (UniqueName: 
\"kubernetes.io/projected/9c60c1de-970b-47e8-8a22-802ae60cd8ba-kube-api-access-d7zg7\") pod \"9c60c1de-970b-47e8-8a22-802ae60cd8ba\" (UID: \"9c60c1de-970b-47e8-8a22-802ae60cd8ba\") " Jan 21 10:20:01 crc kubenswrapper[4684]: I0121 10:20:01.791025 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7907315a-0307-4509-97b4-160bf055fac8-bundle\") pod \"7907315a-0307-4509-97b4-160bf055fac8\" (UID: \"7907315a-0307-4509-97b4-160bf055fac8\") " Jan 21 10:20:01 crc kubenswrapper[4684]: I0121 10:20:01.791079 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j9f2\" (UniqueName: \"kubernetes.io/projected/2c66d4da-7bde-43d6-af8d-957368c8ce4f-kube-api-access-8j9f2\") pod \"2c66d4da-7bde-43d6-af8d-957368c8ce4f\" (UID: \"2c66d4da-7bde-43d6-af8d-957368c8ce4f\") " Jan 21 10:20:01 crc kubenswrapper[4684]: I0121 10:20:01.791865 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c60c1de-970b-47e8-8a22-802ae60cd8ba-bundle" (OuterVolumeSpecName: "bundle") pod "9c60c1de-970b-47e8-8a22-802ae60cd8ba" (UID: "9c60c1de-970b-47e8-8a22-802ae60cd8ba"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:20:01 crc kubenswrapper[4684]: I0121 10:20:01.792238 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c66d4da-7bde-43d6-af8d-957368c8ce4f-bundle" (OuterVolumeSpecName: "bundle") pod "2c66d4da-7bde-43d6-af8d-957368c8ce4f" (UID: "2c66d4da-7bde-43d6-af8d-957368c8ce4f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:20:01 crc kubenswrapper[4684]: I0121 10:20:01.792344 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7907315a-0307-4509-97b4-160bf055fac8-bundle" (OuterVolumeSpecName: "bundle") pod "7907315a-0307-4509-97b4-160bf055fac8" (UID: "7907315a-0307-4509-97b4-160bf055fac8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:20:01 crc kubenswrapper[4684]: I0121 10:20:01.796146 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c66d4da-7bde-43d6-af8d-957368c8ce4f-kube-api-access-8j9f2" (OuterVolumeSpecName: "kube-api-access-8j9f2") pod "2c66d4da-7bde-43d6-af8d-957368c8ce4f" (UID: "2c66d4da-7bde-43d6-af8d-957368c8ce4f"). InnerVolumeSpecName "kube-api-access-8j9f2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:20:01 crc kubenswrapper[4684]: I0121 10:20:01.796840 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c60c1de-970b-47e8-8a22-802ae60cd8ba-kube-api-access-d7zg7" (OuterVolumeSpecName: "kube-api-access-d7zg7") pod "9c60c1de-970b-47e8-8a22-802ae60cd8ba" (UID: "9c60c1de-970b-47e8-8a22-802ae60cd8ba"). InnerVolumeSpecName "kube-api-access-d7zg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:20:01 crc kubenswrapper[4684]: I0121 10:20:01.797582 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7907315a-0307-4509-97b4-160bf055fac8-kube-api-access-89sw6" (OuterVolumeSpecName: "kube-api-access-89sw6") pod "7907315a-0307-4509-97b4-160bf055fac8" (UID: "7907315a-0307-4509-97b4-160bf055fac8"). InnerVolumeSpecName "kube-api-access-89sw6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:20:01 crc kubenswrapper[4684]: I0121 10:20:01.816451 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c66d4da-7bde-43d6-af8d-957368c8ce4f-util" (OuterVolumeSpecName: "util") pod "2c66d4da-7bde-43d6-af8d-957368c8ce4f" (UID: "2c66d4da-7bde-43d6-af8d-957368c8ce4f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:20:01 crc kubenswrapper[4684]: I0121 10:20:01.816468 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c60c1de-970b-47e8-8a22-802ae60cd8ba-util" (OuterVolumeSpecName: "util") pod "9c60c1de-970b-47e8-8a22-802ae60cd8ba" (UID: "9c60c1de-970b-47e8-8a22-802ae60cd8ba"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:20:01 crc kubenswrapper[4684]: I0121 10:20:01.818746 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7907315a-0307-4509-97b4-160bf055fac8-util" (OuterVolumeSpecName: "util") pod "7907315a-0307-4509-97b4-160bf055fac8" (UID: "7907315a-0307-4509-97b4-160bf055fac8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:20:01 crc kubenswrapper[4684]: I0121 10:20:01.892529 4684 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7907315a-0307-4509-97b4-160bf055fac8-util\") on node \"crc\" DevicePath \"\"" Jan 21 10:20:01 crc kubenswrapper[4684]: I0121 10:20:01.892566 4684 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9c60c1de-970b-47e8-8a22-802ae60cd8ba-util\") on node \"crc\" DevicePath \"\"" Jan 21 10:20:01 crc kubenswrapper[4684]: I0121 10:20:01.892578 4684 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2c66d4da-7bde-43d6-af8d-957368c8ce4f-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:20:01 crc kubenswrapper[4684]: I0121 10:20:01.892590 4684 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2c66d4da-7bde-43d6-af8d-957368c8ce4f-util\") on node \"crc\" DevicePath \"\"" Jan 21 10:20:01 crc kubenswrapper[4684]: I0121 10:20:01.892602 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89sw6\" (UniqueName: \"kubernetes.io/projected/7907315a-0307-4509-97b4-160bf055fac8-kube-api-access-89sw6\") on node \"crc\" DevicePath \"\"" Jan 21 10:20:01 crc kubenswrapper[4684]: I0121 10:20:01.892642 4684 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9c60c1de-970b-47e8-8a22-802ae60cd8ba-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:20:01 crc kubenswrapper[4684]: I0121 10:20:01.892654 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7zg7\" (UniqueName: \"kubernetes.io/projected/9c60c1de-970b-47e8-8a22-802ae60cd8ba-kube-api-access-d7zg7\") on node \"crc\" DevicePath \"\"" Jan 21 10:20:01 crc kubenswrapper[4684]: I0121 10:20:01.892666 4684 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7907315a-0307-4509-97b4-160bf055fac8-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:20:01 crc kubenswrapper[4684]: I0121 10:20:01.892677 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8j9f2\" (UniqueName: 
\"kubernetes.io/projected/2c66d4da-7bde-43d6-af8d-957368c8ce4f-kube-api-access-8j9f2\") on node \"crc\" DevicePath \"\"" Jan 21 10:20:02 crc kubenswrapper[4684]: I0121 10:20:02.374209 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/e2ca482ba9d5013d52294c83a5aba8d96bfcab343aa05dec9349fd22154nlsm" event={"ID":"9c60c1de-970b-47e8-8a22-802ae60cd8ba","Type":"ContainerDied","Data":"f74ecc131b51a6d105bbc7b4484daea9e4d272bf57637214607be77469aaff6a"} Jan 21 10:20:02 crc kubenswrapper[4684]: I0121 10:20:02.374271 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f74ecc131b51a6d105bbc7b4484daea9e4d272bf57637214607be77469aaff6a" Jan 21 10:20:02 crc kubenswrapper[4684]: I0121 10:20:02.374435 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/e2ca482ba9d5013d52294c83a5aba8d96bfcab343aa05dec9349fd22154nlsm" Jan 21 10:20:02 crc kubenswrapper[4684]: I0121 10:20:02.377821 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/2ed151d2bd3e125f2453293dd520598a3bcb66264bbe0a9abec30f3a5evzwcp" Jan 21 10:20:02 crc kubenswrapper[4684]: I0121 10:20:02.377813 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/2ed151d2bd3e125f2453293dd520598a3bcb66264bbe0a9abec30f3a5evzwcp" event={"ID":"2c66d4da-7bde-43d6-af8d-957368c8ce4f","Type":"ContainerDied","Data":"6459d9610ed88cb5d72b5c9c8d5fd639ff3eaecd59591c32e495179a3e35c413"} Jan 21 10:20:02 crc kubenswrapper[4684]: I0121 10:20:02.377982 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6459d9610ed88cb5d72b5c9c8d5fd639ff3eaecd59591c32e495179a3e35c413" Jan 21 10:20:02 crc kubenswrapper[4684]: I0121 10:20:02.381270 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsv7br" event={"ID":"7907315a-0307-4509-97b4-160bf055fac8","Type":"ContainerDied","Data":"a3e4e1402aa6d9f89998d21a6257779a2cd76e22bc45f95ce8ec3b00f5cab899"} Jan 21 10:20:02 crc kubenswrapper[4684]: I0121 10:20:02.381326 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3e4e1402aa6d9f89998d21a6257779a2cd76e22bc45f95ce8ec3b00f5cab899" Jan 21 10:20:02 crc kubenswrapper[4684]: I0121 10:20:02.381439 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsv7br" Jan 21 10:20:07 crc kubenswrapper[4684]: I0121 10:20:07.182683 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-68688768b9-92w7t"] Jan 21 10:20:07 crc kubenswrapper[4684]: E0121 10:20:07.183558 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c60c1de-970b-47e8-8a22-802ae60cd8ba" containerName="pull" Jan 21 10:20:07 crc kubenswrapper[4684]: I0121 10:20:07.183576 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c60c1de-970b-47e8-8a22-802ae60cd8ba" containerName="pull" Jan 21 10:20:07 crc kubenswrapper[4684]: E0121 10:20:07.183587 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c66d4da-7bde-43d6-af8d-957368c8ce4f" containerName="util" Jan 21 10:20:07 crc kubenswrapper[4684]: I0121 10:20:07.183599 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c66d4da-7bde-43d6-af8d-957368c8ce4f" containerName="util" Jan 21 10:20:07 crc kubenswrapper[4684]: E0121 10:20:07.183611 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c66d4da-7bde-43d6-af8d-957368c8ce4f" containerName="extract" Jan 21 10:20:07 crc kubenswrapper[4684]: I0121 10:20:07.183618 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c66d4da-7bde-43d6-af8d-957368c8ce4f" containerName="extract" Jan 21 10:20:07 crc kubenswrapper[4684]: E0121 10:20:07.183631 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7907315a-0307-4509-97b4-160bf055fac8" containerName="pull" Jan 21 10:20:07 crc kubenswrapper[4684]: I0121 10:20:07.183637 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="7907315a-0307-4509-97b4-160bf055fac8" containerName="pull" Jan 21 10:20:07 crc kubenswrapper[4684]: E0121 10:20:07.183649 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c66d4da-7bde-43d6-af8d-957368c8ce4f" containerName="pull" Jan 21 10:20:07 crc kubenswrapper[4684]: I0121 10:20:07.183658 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c66d4da-7bde-43d6-af8d-957368c8ce4f" containerName="pull" Jan 21 10:20:07 crc kubenswrapper[4684]: E0121 10:20:07.183666 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c60c1de-970b-47e8-8a22-802ae60cd8ba" containerName="extract" Jan 21 10:20:07 crc kubenswrapper[4684]: I0121 10:20:07.183673 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c60c1de-970b-47e8-8a22-802ae60cd8ba" containerName="extract" Jan 21 10:20:07 crc kubenswrapper[4684]: E0121 10:20:07.183684 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7907315a-0307-4509-97b4-160bf055fac8" containerName="util" Jan 21 10:20:07 crc kubenswrapper[4684]: I0121 10:20:07.183691 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="7907315a-0307-4509-97b4-160bf055fac8" containerName="util" Jan 21 10:20:07 crc kubenswrapper[4684]: E0121 10:20:07.183704 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c60c1de-970b-47e8-8a22-802ae60cd8ba" containerName="util" Jan 21 10:20:07 crc kubenswrapper[4684]: I0121 10:20:07.183711 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c60c1de-970b-47e8-8a22-802ae60cd8ba" containerName="util" Jan 21 10:20:07 crc kubenswrapper[4684]: E0121 10:20:07.183724 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7907315a-0307-4509-97b4-160bf055fac8" containerName="extract" Jan 21 10:20:07 crc kubenswrapper[4684]: 
I0121 10:20:07.183734 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="7907315a-0307-4509-97b4-160bf055fac8" containerName="extract" Jan 21 10:20:07 crc kubenswrapper[4684]: I0121 10:20:07.183845 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c60c1de-970b-47e8-8a22-802ae60cd8ba" containerName="extract" Jan 21 10:20:07 crc kubenswrapper[4684]: I0121 10:20:07.183857 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c66d4da-7bde-43d6-af8d-957368c8ce4f" containerName="extract" Jan 21 10:20:07 crc kubenswrapper[4684]: I0121 10:20:07.183872 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="7907315a-0307-4509-97b4-160bf055fac8" containerName="extract" Jan 21 10:20:07 crc kubenswrapper[4684]: I0121 10:20:07.184356 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-68688768b9-92w7t" Jan 21 10:20:07 crc kubenswrapper[4684]: I0121 10:20:07.186291 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-operator-dockercfg-7hnv2" Jan 21 10:20:07 crc kubenswrapper[4684]: I0121 10:20:07.216982 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-68688768b9-92w7t"] Jan 21 10:20:07 crc kubenswrapper[4684]: I0121 10:20:07.272296 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/3cc1ef89-ee68-427f-bd4e-77d27c77f8c4-runner\") pod \"service-telemetry-operator-68688768b9-92w7t\" (UID: \"3cc1ef89-ee68-427f-bd4e-77d27c77f8c4\") " pod="service-telemetry/service-telemetry-operator-68688768b9-92w7t" Jan 21 10:20:07 crc kubenswrapper[4684]: I0121 10:20:07.272644 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t89gp\" (UniqueName: \"kubernetes.io/projected/3cc1ef89-ee68-427f-bd4e-77d27c77f8c4-kube-api-access-t89gp\") pod \"service-telemetry-operator-68688768b9-92w7t\" (UID: \"3cc1ef89-ee68-427f-bd4e-77d27c77f8c4\") " pod="service-telemetry/service-telemetry-operator-68688768b9-92w7t" Jan 21 10:20:07 crc kubenswrapper[4684]: I0121 10:20:07.374124 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t89gp\" (UniqueName: \"kubernetes.io/projected/3cc1ef89-ee68-427f-bd4e-77d27c77f8c4-kube-api-access-t89gp\") pod \"service-telemetry-operator-68688768b9-92w7t\" (UID: \"3cc1ef89-ee68-427f-bd4e-77d27c77f8c4\") " pod="service-telemetry/service-telemetry-operator-68688768b9-92w7t" Jan 21 10:20:07 crc kubenswrapper[4684]: I0121 10:20:07.374435 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/3cc1ef89-ee68-427f-bd4e-77d27c77f8c4-runner\") pod \"service-telemetry-operator-68688768b9-92w7t\" (UID: \"3cc1ef89-ee68-427f-bd4e-77d27c77f8c4\") " pod="service-telemetry/service-telemetry-operator-68688768b9-92w7t" Jan 21 10:20:07 crc kubenswrapper[4684]: I0121 10:20:07.374885 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/3cc1ef89-ee68-427f-bd4e-77d27c77f8c4-runner\") pod \"service-telemetry-operator-68688768b9-92w7t\" (UID: \"3cc1ef89-ee68-427f-bd4e-77d27c77f8c4\") " pod="service-telemetry/service-telemetry-operator-68688768b9-92w7t" Jan 21 10:20:07 crc kubenswrapper[4684]: I0121 10:20:07.391795 4684 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t89gp\" (UniqueName: \"kubernetes.io/projected/3cc1ef89-ee68-427f-bd4e-77d27c77f8c4-kube-api-access-t89gp\") pod \"service-telemetry-operator-68688768b9-92w7t\" (UID: \"3cc1ef89-ee68-427f-bd4e-77d27c77f8c4\") " pod="service-telemetry/service-telemetry-operator-68688768b9-92w7t" Jan 21 10:20:07 crc kubenswrapper[4684]: I0121 10:20:07.517664 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-68688768b9-92w7t" Jan 21 10:20:07 crc kubenswrapper[4684]: I0121 10:20:07.713501 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-68688768b9-92w7t"] Jan 21 10:20:07 crc kubenswrapper[4684]: W0121 10:20:07.717028 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3cc1ef89_ee68_427f_bd4e_77d27c77f8c4.slice/crio-300dfff58e5b716cc1e802ccbd9208a9e82c6da9f950609fb6a4a92622a8c05d WatchSource:0}: Error finding container 300dfff58e5b716cc1e802ccbd9208a9e82c6da9f950609fb6a4a92622a8c05d: Status 404 returned error can't find the container with id 300dfff58e5b716cc1e802ccbd9208a9e82c6da9f950609fb6a4a92622a8c05d Jan 21 10:20:08 crc kubenswrapper[4684]: I0121 10:20:08.423781 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-68688768b9-92w7t" event={"ID":"3cc1ef89-ee68-427f-bd4e-77d27c77f8c4","Type":"ContainerStarted","Data":"300dfff58e5b716cc1e802ccbd9208a9e82c6da9f950609fb6a4a92622a8c05d"} Jan 21 10:20:08 crc kubenswrapper[4684]: I0121 10:20:08.753225 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-77c9d9f969-mbxq6"] Jan 21 10:20:08 crc kubenswrapper[4684]: I0121 10:20:08.754023 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-77c9d9f969-mbxq6" Jan 21 10:20:08 crc kubenswrapper[4684]: I0121 10:20:08.756654 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-operator-dockercfg-9vvbf" Jan 21 10:20:08 crc kubenswrapper[4684]: I0121 10:20:08.770964 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-77c9d9f969-mbxq6"] Jan 21 10:20:08 crc kubenswrapper[4684]: I0121 10:20:08.894202 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/b8ec58ca-1a38-4f05-9394-90f7eac34be8-runner\") pod \"smart-gateway-operator-77c9d9f969-mbxq6\" (UID: \"b8ec58ca-1a38-4f05-9394-90f7eac34be8\") " pod="service-telemetry/smart-gateway-operator-77c9d9f969-mbxq6" Jan 21 10:20:08 crc kubenswrapper[4684]: I0121 10:20:08.894462 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wt99\" (UniqueName: \"kubernetes.io/projected/b8ec58ca-1a38-4f05-9394-90f7eac34be8-kube-api-access-4wt99\") pod \"smart-gateway-operator-77c9d9f969-mbxq6\" (UID: \"b8ec58ca-1a38-4f05-9394-90f7eac34be8\") " pod="service-telemetry/smart-gateway-operator-77c9d9f969-mbxq6" Jan 21 10:20:08 crc kubenswrapper[4684]: I0121 10:20:08.998136 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/b8ec58ca-1a38-4f05-9394-90f7eac34be8-runner\") pod \"smart-gateway-operator-77c9d9f969-mbxq6\" (UID: \"b8ec58ca-1a38-4f05-9394-90f7eac34be8\") " pod="service-telemetry/smart-gateway-operator-77c9d9f969-mbxq6" Jan 21 10:20:08 crc kubenswrapper[4684]: I0121 10:20:08.998307 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wt99\" (UniqueName: \"kubernetes.io/projected/b8ec58ca-1a38-4f05-9394-90f7eac34be8-kube-api-access-4wt99\") pod \"smart-gateway-operator-77c9d9f969-mbxq6\" (UID: \"b8ec58ca-1a38-4f05-9394-90f7eac34be8\") " pod="service-telemetry/smart-gateway-operator-77c9d9f969-mbxq6" Jan 21 10:20:08 crc kubenswrapper[4684]: I0121 10:20:08.999245 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/b8ec58ca-1a38-4f05-9394-90f7eac34be8-runner\") pod \"smart-gateway-operator-77c9d9f969-mbxq6\" (UID: \"b8ec58ca-1a38-4f05-9394-90f7eac34be8\") " pod="service-telemetry/smart-gateway-operator-77c9d9f969-mbxq6" Jan 21 10:20:09 crc kubenswrapper[4684]: I0121 10:20:09.023923 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wt99\" (UniqueName: \"kubernetes.io/projected/b8ec58ca-1a38-4f05-9394-90f7eac34be8-kube-api-access-4wt99\") pod \"smart-gateway-operator-77c9d9f969-mbxq6\" (UID: \"b8ec58ca-1a38-4f05-9394-90f7eac34be8\") " pod="service-telemetry/smart-gateway-operator-77c9d9f969-mbxq6" Jan 21 10:20:09 crc kubenswrapper[4684]: I0121 10:20:09.083492 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-77c9d9f969-mbxq6" Jan 21 10:20:09 crc kubenswrapper[4684]: I0121 10:20:09.317555 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-77c9d9f969-mbxq6"] Jan 21 10:20:09 crc kubenswrapper[4684]: W0121 10:20:09.326642 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8ec58ca_1a38_4f05_9394_90f7eac34be8.slice/crio-fe6bd90a093e859239af81a6fe5f06a045176656b8d03fde7bab876e8e927a4f WatchSource:0}: Error finding container fe6bd90a093e859239af81a6fe5f06a045176656b8d03fde7bab876e8e927a4f: Status 404 returned error can't find the container with id fe6bd90a093e859239af81a6fe5f06a045176656b8d03fde7bab876e8e927a4f Jan 21 10:20:09 crc kubenswrapper[4684]: I0121 10:20:09.444745 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-77c9d9f969-mbxq6" event={"ID":"b8ec58ca-1a38-4f05-9394-90f7eac34be8","Type":"ContainerStarted","Data":"fe6bd90a093e859239af81a6fe5f06a045176656b8d03fde7bab876e8e927a4f"} Jan 21 10:20:10 crc kubenswrapper[4684]: I0121 10:20:10.877327 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-bpfb4"] Jan 21 10:20:10 crc kubenswrapper[4684]: I0121 10:20:10.879433 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-bpfb4" Jan 21 10:20:10 crc kubenswrapper[4684]: I0121 10:20:10.886162 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-bpfb4"] Jan 21 10:20:10 crc kubenswrapper[4684]: I0121 10:20:10.886754 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"interconnect-operator-dockercfg-d72br" Jan 21 10:20:11 crc kubenswrapper[4684]: I0121 10:20:11.022499 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5wn4\" (UniqueName: \"kubernetes.io/projected/3a7811c1-b3f1-436d-b98b-28a957a4a7bf-kube-api-access-b5wn4\") pod \"interconnect-operator-5bb49f789d-bpfb4\" (UID: \"3a7811c1-b3f1-436d-b98b-28a957a4a7bf\") " pod="service-telemetry/interconnect-operator-5bb49f789d-bpfb4" Jan 21 10:20:11 crc kubenswrapper[4684]: I0121 10:20:11.124096 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5wn4\" (UniqueName: \"kubernetes.io/projected/3a7811c1-b3f1-436d-b98b-28a957a4a7bf-kube-api-access-b5wn4\") pod \"interconnect-operator-5bb49f789d-bpfb4\" (UID: \"3a7811c1-b3f1-436d-b98b-28a957a4a7bf\") " pod="service-telemetry/interconnect-operator-5bb49f789d-bpfb4" Jan 21 10:20:11 crc kubenswrapper[4684]: I0121 10:20:11.142154 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5wn4\" (UniqueName: \"kubernetes.io/projected/3a7811c1-b3f1-436d-b98b-28a957a4a7bf-kube-api-access-b5wn4\") pod \"interconnect-operator-5bb49f789d-bpfb4\" (UID: \"3a7811c1-b3f1-436d-b98b-28a957a4a7bf\") " pod="service-telemetry/interconnect-operator-5bb49f789d-bpfb4" Jan 21 10:20:11 crc kubenswrapper[4684]: I0121 10:20:11.204246 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-bpfb4" Jan 21 10:20:11 crc kubenswrapper[4684]: I0121 10:20:11.489566 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-bpfb4"] Jan 21 10:20:11 crc kubenswrapper[4684]: W0121 10:20:11.495266 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a7811c1_b3f1_436d_b98b_28a957a4a7bf.slice/crio-52983ac0a46cf85e87d53fd923182ae2bf8cce3da7063143fdf8a8e1e91429b1 WatchSource:0}: Error finding container 52983ac0a46cf85e87d53fd923182ae2bf8cce3da7063143fdf8a8e1e91429b1: Status 404 returned error can't find the container with id 52983ac0a46cf85e87d53fd923182ae2bf8cce3da7063143fdf8a8e1e91429b1 Jan 21 10:20:12 crc kubenswrapper[4684]: I0121 10:20:12.471261 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-bpfb4" event={"ID":"3a7811c1-b3f1-436d-b98b-28a957a4a7bf","Type":"ContainerStarted","Data":"52983ac0a46cf85e87d53fd923182ae2bf8cce3da7063143fdf8a8e1e91429b1"} Jan 21 10:20:37 crc kubenswrapper[4684]: E0121 10:20:37.060570 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/infrawatch/smart-gateway-operator:latest" Jan 21 10:20:37 crc kubenswrapper[4684]: E0121 10:20:37.061273 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/infrawatch/smart-gateway-operator:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:smart-gateway-operator,ValueFrom:nil,},EnvVar{Name:ANSIBLE_GATHERING,Value:explicit,ValueFrom:nil,},EnvVar{Name:ANSIBLE_VERBOSITY_SMARTGATEWAY_SMARTGATEWAY_INFRA_WATCH,Value:4,ValueFrom:nil,},EnvVar{Name:ANSIBLE_DEBUG_LOGS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CORE_SMARTGATEWAY_IMAGE,Value:quay.io/infrawatch/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BRIDGE_SMARTGATEWAY_IMAGE,Value:quay.io/infrawatch/sg-bridge:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OAUTH_PROXY_IMAGE,Value:quay.io/openshift/origin-oauth-proxy:latest,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:smart-gateway-operator.v5.0.1768085178,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:runner,ReadOnly:false,MountPath:/tmp/ansible-operator/runner,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4wt99,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscal
ation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod smart-gateway-operator-77c9d9f969-mbxq6_service-telemetry(b8ec58ca-1a38-4f05-9394-90f7eac34be8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:20:37 crc kubenswrapper[4684]: E0121 10:20:37.062743 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/smart-gateway-operator-77c9d9f969-mbxq6" podUID="b8ec58ca-1a38-4f05-9394-90f7eac34be8" Jan 21 10:20:37 crc kubenswrapper[4684]: E0121 10:20:37.505675 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/infrawatch/service-telemetry-operator:latest" Jan 21 10:20:37 crc kubenswrapper[4684]: E0121 10:20:37.505857 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/infrawatch/service-telemetry-operator:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:service-telemetry-operator,ValueFrom:nil,},EnvVar{Name:ANSIBLE_GATHERING,Value:explicit,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_WEBHOOK_SNMP_IMAGE,Value:quay.io/infrawatch/prometheus-webhook-snmp:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OAUTH_PROXY_IMAGE,Value:quay.io/openshift/origin-oauth-proxy:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_IMAGE,Value:quay.io/prometheus/prometheus:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ALERTMANAGER_IMAGE,Value:quay.io/prometheus/alertmanager:latest,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:service-telemetry-operator.v1.5.1768085182,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:runner,ReadOnly:false,MountPath:/tmp/ansible-operator/runner,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t89gp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Co
ntainerResizePolicy{},RestartPolicy:nil,} start failed in pod service-telemetry-operator-68688768b9-92w7t_service-telemetry(3cc1ef89-ee68-427f-bd4e-77d27c77f8c4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:20:37 crc kubenswrapper[4684]: E0121 10:20:37.507903 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/service-telemetry-operator-68688768b9-92w7t" podUID="3cc1ef89-ee68-427f-bd4e-77d27c77f8c4" Jan 21 10:20:37 crc kubenswrapper[4684]: I0121 10:20:37.645826 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-bpfb4" event={"ID":"3a7811c1-b3f1-436d-b98b-28a957a4a7bf","Type":"ContainerStarted","Data":"3c7dc6fa29963c16bdb760de6b3ef7ba314c09e41ef240dcc9ac096d4a3a7292"} Jan 21 10:20:37 crc kubenswrapper[4684]: E0121 10:20:37.647759 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/service-telemetry-operator:latest\\\"\"" pod="service-telemetry/service-telemetry-operator-68688768b9-92w7t" podUID="3cc1ef89-ee68-427f-bd4e-77d27c77f8c4" Jan 21 10:20:37 crc kubenswrapper[4684]: E0121 10:20:37.648410 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/smart-gateway-operator:latest\\\"\"" pod="service-telemetry/smart-gateway-operator-77c9d9f969-mbxq6" podUID="b8ec58ca-1a38-4f05-9394-90f7eac34be8" Jan 21 10:20:37 crc kubenswrapper[4684]: I0121 10:20:37.672329 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-5bb49f789d-bpfb4" podStartSLOduration=5.281724457 podStartE2EDuration="27.672301006s" podCreationTimestamp="2026-01-21 10:20:10 +0000 UTC" firstStartedPulling="2026-01-21 10:20:11.49775054 +0000 UTC m=+849.255833507" lastFinishedPulling="2026-01-21 10:20:33.888327089 +0000 UTC m=+871.646410056" observedRunningTime="2026-01-21 10:20:37.663505894 +0000 UTC m=+875.421588861" watchObservedRunningTime="2026-01-21 10:20:37.672301006 +0000 UTC m=+875.430383993" Jan 21 10:20:49 crc kubenswrapper[4684]: I0121 10:20:49.724749 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-77c9d9f969-mbxq6" event={"ID":"b8ec58ca-1a38-4f05-9394-90f7eac34be8","Type":"ContainerStarted","Data":"8e6ca2cf1e61766dc90fbca212f94b64eb8a18b2a023efd847254848789f31ab"} Jan 21 10:20:49 crc kubenswrapper[4684]: I0121 10:20:49.750296 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-77c9d9f969-mbxq6" podStartSLOduration=2.010918243 podStartE2EDuration="41.750264527s" podCreationTimestamp="2026-01-21 10:20:08 +0000 UTC" firstStartedPulling="2026-01-21 10:20:09.330064889 +0000 UTC m=+847.088147856" lastFinishedPulling="2026-01-21 10:20:49.069411173 +0000 UTC m=+886.827494140" observedRunningTime="2026-01-21 10:20:49.740968169 +0000 UTC m=+887.499051206" watchObservedRunningTime="2026-01-21 10:20:49.750264527 +0000 UTC m=+887.508347504" Jan 21 10:20:52 crc kubenswrapper[4684]: I0121 10:20:52.746232 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/service-telemetry-operator-68688768b9-92w7t" event={"ID":"3cc1ef89-ee68-427f-bd4e-77d27c77f8c4","Type":"ContainerStarted","Data":"7d4f71c65b84d89c8ff3e6bf71fcaa1be07a4f98e4bcc60a07dd46507836b822"} Jan 21 10:20:52 crc kubenswrapper[4684]: I0121 10:20:52.763454 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-68688768b9-92w7t" podStartSLOduration=1.519639603 podStartE2EDuration="45.763433616s" podCreationTimestamp="2026-01-21 10:20:07 +0000 UTC" firstStartedPulling="2026-01-21 10:20:07.718994339 +0000 UTC m=+845.477077306" lastFinishedPulling="2026-01-21 10:20:51.962788332 +0000 UTC m=+889.720871319" observedRunningTime="2026-01-21 10:20:52.762641192 +0000 UTC m=+890.520724159" watchObservedRunningTime="2026-01-21 10:20:52.763433616 +0000 UTC m=+890.521516683" Jan 21 10:21:14 crc kubenswrapper[4684]: I0121 10:21:14.568934 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-cssbq"] Jan 21 10:21:14 crc kubenswrapper[4684]: I0121 10:21:14.571514 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-cssbq" Jan 21 10:21:14 crc kubenswrapper[4684]: I0121 10:21:14.577182 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca" Jan 21 10:21:14 crc kubenswrapper[4684]: I0121 10:21:14.577510 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials" Jan 21 10:21:14 crc kubenswrapper[4684]: I0121 10:21:14.577699 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca" Jan 21 10:21:14 crc kubenswrapper[4684]: I0121 10:21:14.577807 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials" Jan 21 10:21:14 crc kubenswrapper[4684]: I0121 10:21:14.577907 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-dockercfg-6959v" Jan 21 10:21:14 crc kubenswrapper[4684]: I0121 10:21:14.578014 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config" Jan 21 10:21:14 crc kubenswrapper[4684]: I0121 10:21:14.585203 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-users" Jan 21 10:21:14 crc kubenswrapper[4684]: I0121 10:21:14.603950 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-cssbq"] Jan 21 10:21:14 crc kubenswrapper[4684]: I0121 10:21:14.627775 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65mcz\" (UniqueName: \"kubernetes.io/projected/c5f7a6cf-b540-4f93-b633-481d62ebc626-kube-api-access-65mcz\") pod \"default-interconnect-68864d46cb-cssbq\" (UID: \"c5f7a6cf-b540-4f93-b633-481d62ebc626\") " pod="service-telemetry/default-interconnect-68864d46cb-cssbq" Jan 21 10:21:14 crc kubenswrapper[4684]: I0121 10:21:14.627830 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c5f7a6cf-b540-4f93-b633-481d62ebc626-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-cssbq\" 
(UID: \"c5f7a6cf-b540-4f93-b633-481d62ebc626\") " pod="service-telemetry/default-interconnect-68864d46cb-cssbq" Jan 21 10:21:14 crc kubenswrapper[4684]: I0121 10:21:14.627878 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c5f7a6cf-b540-4f93-b633-481d62ebc626-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-cssbq\" (UID: \"c5f7a6cf-b540-4f93-b633-481d62ebc626\") " pod="service-telemetry/default-interconnect-68864d46cb-cssbq" Jan 21 10:21:14 crc kubenswrapper[4684]: I0121 10:21:14.627904 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c5f7a6cf-b540-4f93-b633-481d62ebc626-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-cssbq\" (UID: \"c5f7a6cf-b540-4f93-b633-481d62ebc626\") " pod="service-telemetry/default-interconnect-68864d46cb-cssbq" Jan 21 10:21:14 crc kubenswrapper[4684]: I0121 10:21:14.627952 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/c5f7a6cf-b540-4f93-b633-481d62ebc626-sasl-users\") pod \"default-interconnect-68864d46cb-cssbq\" (UID: \"c5f7a6cf-b540-4f93-b633-481d62ebc626\") " pod="service-telemetry/default-interconnect-68864d46cb-cssbq" Jan 21 10:21:14 crc kubenswrapper[4684]: I0121 10:21:14.627978 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c5f7a6cf-b540-4f93-b633-481d62ebc626-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-cssbq\" (UID: \"c5f7a6cf-b540-4f93-b633-481d62ebc626\") " pod="service-telemetry/default-interconnect-68864d46cb-cssbq" Jan 21 10:21:14 crc kubenswrapper[4684]: I0121 10:21:14.628006 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c5f7a6cf-b540-4f93-b633-481d62ebc626-sasl-config\") pod \"default-interconnect-68864d46cb-cssbq\" (UID: \"c5f7a6cf-b540-4f93-b633-481d62ebc626\") " pod="service-telemetry/default-interconnect-68864d46cb-cssbq" Jan 21 10:21:14 crc kubenswrapper[4684]: I0121 10:21:14.730271 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65mcz\" (UniqueName: \"kubernetes.io/projected/c5f7a6cf-b540-4f93-b633-481d62ebc626-kube-api-access-65mcz\") pod \"default-interconnect-68864d46cb-cssbq\" (UID: \"c5f7a6cf-b540-4f93-b633-481d62ebc626\") " pod="service-telemetry/default-interconnect-68864d46cb-cssbq" Jan 21 10:21:14 crc kubenswrapper[4684]: I0121 10:21:14.730355 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c5f7a6cf-b540-4f93-b633-481d62ebc626-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-cssbq\" (UID: \"c5f7a6cf-b540-4f93-b633-481d62ebc626\") " pod="service-telemetry/default-interconnect-68864d46cb-cssbq" Jan 21 10:21:14 crc kubenswrapper[4684]: I0121 10:21:14.730429 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: 
\"kubernetes.io/secret/c5f7a6cf-b540-4f93-b633-481d62ebc626-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-cssbq\" (UID: \"c5f7a6cf-b540-4f93-b633-481d62ebc626\") " pod="service-telemetry/default-interconnect-68864d46cb-cssbq" Jan 21 10:21:14 crc kubenswrapper[4684]: I0121 10:21:14.730458 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c5f7a6cf-b540-4f93-b633-481d62ebc626-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-cssbq\" (UID: \"c5f7a6cf-b540-4f93-b633-481d62ebc626\") " pod="service-telemetry/default-interconnect-68864d46cb-cssbq" Jan 21 10:21:14 crc kubenswrapper[4684]: I0121 10:21:14.730528 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/c5f7a6cf-b540-4f93-b633-481d62ebc626-sasl-users\") pod \"default-interconnect-68864d46cb-cssbq\" (UID: \"c5f7a6cf-b540-4f93-b633-481d62ebc626\") " pod="service-telemetry/default-interconnect-68864d46cb-cssbq" Jan 21 10:21:14 crc kubenswrapper[4684]: I0121 10:21:14.730566 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c5f7a6cf-b540-4f93-b633-481d62ebc626-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-cssbq\" (UID: \"c5f7a6cf-b540-4f93-b633-481d62ebc626\") " pod="service-telemetry/default-interconnect-68864d46cb-cssbq" Jan 21 10:21:14 crc kubenswrapper[4684]: I0121 10:21:14.730597 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c5f7a6cf-b540-4f93-b633-481d62ebc626-sasl-config\") pod \"default-interconnect-68864d46cb-cssbq\" (UID: \"c5f7a6cf-b540-4f93-b633-481d62ebc626\") " pod="service-telemetry/default-interconnect-68864d46cb-cssbq" Jan 21 10:21:14 crc kubenswrapper[4684]: I0121 10:21:14.731943 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c5f7a6cf-b540-4f93-b633-481d62ebc626-sasl-config\") pod \"default-interconnect-68864d46cb-cssbq\" (UID: \"c5f7a6cf-b540-4f93-b633-481d62ebc626\") " pod="service-telemetry/default-interconnect-68864d46cb-cssbq" Jan 21 10:21:14 crc kubenswrapper[4684]: I0121 10:21:14.739298 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/c5f7a6cf-b540-4f93-b633-481d62ebc626-sasl-users\") pod \"default-interconnect-68864d46cb-cssbq\" (UID: \"c5f7a6cf-b540-4f93-b633-481d62ebc626\") " pod="service-telemetry/default-interconnect-68864d46cb-cssbq" Jan 21 10:21:14 crc kubenswrapper[4684]: I0121 10:21:14.741948 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c5f7a6cf-b540-4f93-b633-481d62ebc626-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-cssbq\" (UID: \"c5f7a6cf-b540-4f93-b633-481d62ebc626\") " pod="service-telemetry/default-interconnect-68864d46cb-cssbq" Jan 21 10:21:14 crc kubenswrapper[4684]: I0121 10:21:14.742505 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: 
\"kubernetes.io/secret/c5f7a6cf-b540-4f93-b633-481d62ebc626-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-cssbq\" (UID: \"c5f7a6cf-b540-4f93-b633-481d62ebc626\") " pod="service-telemetry/default-interconnect-68864d46cb-cssbq" Jan 21 10:21:14 crc kubenswrapper[4684]: I0121 10:21:14.746242 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c5f7a6cf-b540-4f93-b633-481d62ebc626-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-cssbq\" (UID: \"c5f7a6cf-b540-4f93-b633-481d62ebc626\") " pod="service-telemetry/default-interconnect-68864d46cb-cssbq" Jan 21 10:21:14 crc kubenswrapper[4684]: I0121 10:21:14.747016 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c5f7a6cf-b540-4f93-b633-481d62ebc626-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-cssbq\" (UID: \"c5f7a6cf-b540-4f93-b633-481d62ebc626\") " pod="service-telemetry/default-interconnect-68864d46cb-cssbq" Jan 21 10:21:14 crc kubenswrapper[4684]: I0121 10:21:14.751917 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65mcz\" (UniqueName: \"kubernetes.io/projected/c5f7a6cf-b540-4f93-b633-481d62ebc626-kube-api-access-65mcz\") pod \"default-interconnect-68864d46cb-cssbq\" (UID: \"c5f7a6cf-b540-4f93-b633-481d62ebc626\") " pod="service-telemetry/default-interconnect-68864d46cb-cssbq" Jan 21 10:21:14 crc kubenswrapper[4684]: I0121 10:21:14.889205 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-cssbq" Jan 21 10:21:15 crc kubenswrapper[4684]: I0121 10:21:15.288414 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-cssbq"] Jan 21 10:21:15 crc kubenswrapper[4684]: I0121 10:21:15.917680 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-cssbq" event={"ID":"c5f7a6cf-b540-4f93-b633-481d62ebc626","Type":"ContainerStarted","Data":"b2fd77c34b05e9a60dc775e8fdbc875aa047cf9e7ab5a40fd08f9815a9cdab3f"} Jan 21 10:21:20 crc kubenswrapper[4684]: I0121 10:21:20.958009 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-cssbq" event={"ID":"c5f7a6cf-b540-4f93-b633-481d62ebc626","Type":"ContainerStarted","Data":"3f90d59b787e6bb199531391d36825e49a59aea529d49f50c3625751d5bcdd48"} Jan 21 10:21:22 crc kubenswrapper[4684]: I0121 10:21:22.676535 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-cssbq" podStartSLOduration=3.208086181 podStartE2EDuration="8.67651427s" podCreationTimestamp="2026-01-21 10:21:14 +0000 UTC" firstStartedPulling="2026-01-21 10:21:15.296330124 +0000 UTC m=+913.054413091" lastFinishedPulling="2026-01-21 10:21:20.764758203 +0000 UTC m=+918.522841180" observedRunningTime="2026-01-21 10:21:20.981223476 +0000 UTC m=+918.739306443" watchObservedRunningTime="2026-01-21 10:21:22.67651427 +0000 UTC m=+920.434597237" Jan 21 10:21:22 crc kubenswrapper[4684]: I0121 10:21:22.681736 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b8tz5"] Jan 21 10:21:22 crc kubenswrapper[4684]: I0121 10:21:22.683120 4684 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/community-operators-b8tz5" Jan 21 10:21:22 crc kubenswrapper[4684]: I0121 10:21:22.709891 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b8tz5"] Jan 21 10:21:22 crc kubenswrapper[4684]: I0121 10:21:22.845684 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44d8b092-f590-4b18-bf07-509f87cadb00-catalog-content\") pod \"community-operators-b8tz5\" (UID: \"44d8b092-f590-4b18-bf07-509f87cadb00\") " pod="openshift-marketplace/community-operators-b8tz5" Jan 21 10:21:22 crc kubenswrapper[4684]: I0121 10:21:22.846228 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggcwx\" (UniqueName: \"kubernetes.io/projected/44d8b092-f590-4b18-bf07-509f87cadb00-kube-api-access-ggcwx\") pod \"community-operators-b8tz5\" (UID: \"44d8b092-f590-4b18-bf07-509f87cadb00\") " pod="openshift-marketplace/community-operators-b8tz5" Jan 21 10:21:22 crc kubenswrapper[4684]: I0121 10:21:22.846277 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44d8b092-f590-4b18-bf07-509f87cadb00-utilities\") pod \"community-operators-b8tz5\" (UID: \"44d8b092-f590-4b18-bf07-509f87cadb00\") " pod="openshift-marketplace/community-operators-b8tz5" Jan 21 10:21:22 crc kubenswrapper[4684]: I0121 10:21:22.947634 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggcwx\" (UniqueName: \"kubernetes.io/projected/44d8b092-f590-4b18-bf07-509f87cadb00-kube-api-access-ggcwx\") pod \"community-operators-b8tz5\" (UID: \"44d8b092-f590-4b18-bf07-509f87cadb00\") " pod="openshift-marketplace/community-operators-b8tz5" Jan 21 10:21:22 crc kubenswrapper[4684]: I0121 10:21:22.947693 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44d8b092-f590-4b18-bf07-509f87cadb00-utilities\") pod \"community-operators-b8tz5\" (UID: \"44d8b092-f590-4b18-bf07-509f87cadb00\") " pod="openshift-marketplace/community-operators-b8tz5" Jan 21 10:21:22 crc kubenswrapper[4684]: I0121 10:21:22.947751 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44d8b092-f590-4b18-bf07-509f87cadb00-catalog-content\") pod \"community-operators-b8tz5\" (UID: \"44d8b092-f590-4b18-bf07-509f87cadb00\") " pod="openshift-marketplace/community-operators-b8tz5" Jan 21 10:21:22 crc kubenswrapper[4684]: I0121 10:21:22.948310 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44d8b092-f590-4b18-bf07-509f87cadb00-utilities\") pod \"community-operators-b8tz5\" (UID: \"44d8b092-f590-4b18-bf07-509f87cadb00\") " pod="openshift-marketplace/community-operators-b8tz5" Jan 21 10:21:22 crc kubenswrapper[4684]: I0121 10:21:22.948332 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44d8b092-f590-4b18-bf07-509f87cadb00-catalog-content\") pod \"community-operators-b8tz5\" (UID: \"44d8b092-f590-4b18-bf07-509f87cadb00\") " pod="openshift-marketplace/community-operators-b8tz5" Jan 21 10:21:22 crc kubenswrapper[4684]: I0121 10:21:22.972606 4684 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggcwx\" (UniqueName: \"kubernetes.io/projected/44d8b092-f590-4b18-bf07-509f87cadb00-kube-api-access-ggcwx\") pod \"community-operators-b8tz5\" (UID: \"44d8b092-f590-4b18-bf07-509f87cadb00\") " pod="openshift-marketplace/community-operators-b8tz5" Jan 21 10:21:23 crc kubenswrapper[4684]: I0121 10:21:23.003251 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b8tz5" Jan 21 10:21:23 crc kubenswrapper[4684]: I0121 10:21:23.314696 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b8tz5"] Jan 21 10:21:23 crc kubenswrapper[4684]: I0121 10:21:23.976379 4684 generic.go:334] "Generic (PLEG): container finished" podID="44d8b092-f590-4b18-bf07-509f87cadb00" containerID="3c06d0ac8de9523695101da91248f6fd54909eae9f5f40705a062e3b4da9711b" exitCode=0 Jan 21 10:21:23 crc kubenswrapper[4684]: I0121 10:21:23.976502 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b8tz5" event={"ID":"44d8b092-f590-4b18-bf07-509f87cadb00","Type":"ContainerDied","Data":"3c06d0ac8de9523695101da91248f6fd54909eae9f5f40705a062e3b4da9711b"} Jan 21 10:21:23 crc kubenswrapper[4684]: I0121 10:21:23.976753 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b8tz5" event={"ID":"44d8b092-f590-4b18-bf07-509f87cadb00","Type":"ContainerStarted","Data":"28f053de35b2e905fd2b0a6266b00d5fe64193608e98a9a4e6603e95964d3052"} Jan 21 10:21:24 crc kubenswrapper[4684]: I0121 10:21:24.987305 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b8tz5" event={"ID":"44d8b092-f590-4b18-bf07-509f87cadb00","Type":"ContainerStarted","Data":"ede157c54b3dc543fdebbbac59a68db773b310538782b1a29e5cf35b921761c4"} Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.076198 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.078653 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.084151 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-prometheus-proxy-tls" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.084282 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-tls-assets-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.084438 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-stf-dockercfg-w546w" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.084449 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-web-config" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.084282 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-session-secret" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.084553 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-1" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.084643 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.084824 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-2" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.084869 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"serving-certs-ca-bundle" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.084981 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.097538 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.186137 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fc166550-60f0-4fee-a249-db0689c07f60-config-out\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.186185 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/fc166550-60f0-4fee-a249-db0689c07f60-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.186226 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fc166550-60f0-4fee-a249-db0689c07f60-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.186258 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/fc166550-60f0-4fee-a249-db0689c07f60-config\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.186291 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx9vc\" (UniqueName: \"kubernetes.io/projected/fc166550-60f0-4fee-a249-db0689c07f60-kube-api-access-zx9vc\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.186346 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fc166550-60f0-4fee-a249-db0689c07f60-web-config\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.186471 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fc166550-60f0-4fee-a249-db0689c07f60-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.186514 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fc166550-60f0-4fee-a249-db0689c07f60-tls-assets\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.186555 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fc166550-60f0-4fee-a249-db0689c07f60-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.186633 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-19e23d8a-9144-4079-9dd3-5c20ff33918d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19e23d8a-9144-4079-9dd3-5c20ff33918d\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.186684 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc166550-60f0-4fee-a249-db0689c07f60-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.186717 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fc166550-60f0-4fee-a249-db0689c07f60-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " 
pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.287463 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-19e23d8a-9144-4079-9dd3-5c20ff33918d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19e23d8a-9144-4079-9dd3-5c20ff33918d\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.287519 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc166550-60f0-4fee-a249-db0689c07f60-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.287542 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fc166550-60f0-4fee-a249-db0689c07f60-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.287570 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fc166550-60f0-4fee-a249-db0689c07f60-config-out\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.287595 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/fc166550-60f0-4fee-a249-db0689c07f60-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.287627 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fc166550-60f0-4fee-a249-db0689c07f60-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.287648 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc166550-60f0-4fee-a249-db0689c07f60-config\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.287668 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx9vc\" (UniqueName: \"kubernetes.io/projected/fc166550-60f0-4fee-a249-db0689c07f60-kube-api-access-zx9vc\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.287701 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fc166550-60f0-4fee-a249-db0689c07f60-web-config\") pod \"prometheus-default-0\" (UID: 
\"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.287724 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fc166550-60f0-4fee-a249-db0689c07f60-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.287743 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fc166550-60f0-4fee-a249-db0689c07f60-tls-assets\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.287758 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fc166550-60f0-4fee-a249-db0689c07f60-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.288515 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fc166550-60f0-4fee-a249-db0689c07f60-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: E0121 10:21:25.288614 4684 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Jan 21 10:21:25 crc kubenswrapper[4684]: E0121 10:21:25.288666 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc166550-60f0-4fee-a249-db0689c07f60-secret-default-prometheus-proxy-tls podName:fc166550-60f0-4fee-a249-db0689c07f60 nodeName:}" failed. No retries permitted until 2026-01-21 10:21:25.788645656 +0000 UTC m=+923.546728623 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/fc166550-60f0-4fee-a249-db0689c07f60-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "fc166550-60f0-4fee-a249-db0689c07f60") : secret "default-prometheus-proxy-tls" not found Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.289468 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc166550-60f0-4fee-a249-db0689c07f60-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.289986 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fc166550-60f0-4fee-a249-db0689c07f60-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.290983 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fc166550-60f0-4fee-a249-db0689c07f60-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.294843 4684 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.294886 4684 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-19e23d8a-9144-4079-9dd3-5c20ff33918d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19e23d8a-9144-4079-9dd3-5c20ff33918d\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/798cbb781f6736ed87e4e7c980902e0efe17468362dd3e418341bf465cc2e2c0/globalmount\"" pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.295964 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/fc166550-60f0-4fee-a249-db0689c07f60-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.296052 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fc166550-60f0-4fee-a249-db0689c07f60-config-out\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.296185 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fc166550-60f0-4fee-a249-db0689c07f60-web-config\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.297395 4684 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc166550-60f0-4fee-a249-db0689c07f60-config\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.303984 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fc166550-60f0-4fee-a249-db0689c07f60-tls-assets\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.308902 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx9vc\" (UniqueName: \"kubernetes.io/projected/fc166550-60f0-4fee-a249-db0689c07f60-kube-api-access-zx9vc\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.329453 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-19e23d8a-9144-4079-9dd3-5c20ff33918d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-19e23d8a-9144-4079-9dd3-5c20ff33918d\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.794046 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fc166550-60f0-4fee-a249-db0689c07f60-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:25 crc kubenswrapper[4684]: E0121 10:21:25.794233 4684 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Jan 21 10:21:25 crc kubenswrapper[4684]: E0121 10:21:25.794644 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc166550-60f0-4fee-a249-db0689c07f60-secret-default-prometheus-proxy-tls podName:fc166550-60f0-4fee-a249-db0689c07f60 nodeName:}" failed. No retries permitted until 2026-01-21 10:21:26.79461633 +0000 UTC m=+924.552699307 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/fc166550-60f0-4fee-a249-db0689c07f60-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "fc166550-60f0-4fee-a249-db0689c07f60") : secret "default-prometheus-proxy-tls" not found Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.995055 4684 generic.go:334] "Generic (PLEG): container finished" podID="44d8b092-f590-4b18-bf07-509f87cadb00" containerID="ede157c54b3dc543fdebbbac59a68db773b310538782b1a29e5cf35b921761c4" exitCode=0 Jan 21 10:21:25 crc kubenswrapper[4684]: I0121 10:21:25.995138 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b8tz5" event={"ID":"44d8b092-f590-4b18-bf07-509f87cadb00","Type":"ContainerDied","Data":"ede157c54b3dc543fdebbbac59a68db773b310538782b1a29e5cf35b921761c4"} Jan 21 10:21:26 crc kubenswrapper[4684]: I0121 10:21:26.810167 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fc166550-60f0-4fee-a249-db0689c07f60-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:26 crc kubenswrapper[4684]: I0121 10:21:26.825198 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fc166550-60f0-4fee-a249-db0689c07f60-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"fc166550-60f0-4fee-a249-db0689c07f60\") " pod="service-telemetry/prometheus-default-0" Jan 21 10:21:26 crc kubenswrapper[4684]: I0121 10:21:26.904433 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Jan 21 10:21:27 crc kubenswrapper[4684]: I0121 10:21:27.016961 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b8tz5" event={"ID":"44d8b092-f590-4b18-bf07-509f87cadb00","Type":"ContainerStarted","Data":"7a25b1301706f309960a2c61d37d08b2ecaff8ce334d0f9b3f82028d4c90665d"} Jan 21 10:21:27 crc kubenswrapper[4684]: I0121 10:21:27.039147 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b8tz5" podStartSLOduration=2.609092749 podStartE2EDuration="5.039127069s" podCreationTimestamp="2026-01-21 10:21:22 +0000 UTC" firstStartedPulling="2026-01-21 10:21:23.978353783 +0000 UTC m=+921.736436740" lastFinishedPulling="2026-01-21 10:21:26.408388083 +0000 UTC m=+924.166471060" observedRunningTime="2026-01-21 10:21:27.037759266 +0000 UTC m=+924.795842233" watchObservedRunningTime="2026-01-21 10:21:27.039127069 +0000 UTC m=+924.797210036" Jan 21 10:21:27 crc kubenswrapper[4684]: I0121 10:21:27.128517 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 21 10:21:27 crc kubenswrapper[4684]: W0121 10:21:27.135046 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc166550_60f0_4fee_a249_db0689c07f60.slice/crio-0a498036463b87bdf4c5f77093f65bd1e75c9dea1ea33fbaa08dba87ae7b6711 WatchSource:0}: Error finding container 0a498036463b87bdf4c5f77093f65bd1e75c9dea1ea33fbaa08dba87ae7b6711: Status 404 returned error can't find the container with id 0a498036463b87bdf4c5f77093f65bd1e75c9dea1ea33fbaa08dba87ae7b6711 Jan 21 10:21:28 crc kubenswrapper[4684]: I0121 10:21:28.024972 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"fc166550-60f0-4fee-a249-db0689c07f60","Type":"ContainerStarted","Data":"0a498036463b87bdf4c5f77093f65bd1e75c9dea1ea33fbaa08dba87ae7b6711"} Jan 21 10:21:33 crc kubenswrapper[4684]: I0121 10:21:33.004496 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b8tz5" Jan 21 10:21:33 crc kubenswrapper[4684]: I0121 10:21:33.005198 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b8tz5" Jan 21 10:21:33 crc kubenswrapper[4684]: I0121 10:21:33.050984 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b8tz5" Jan 21 10:21:33 crc kubenswrapper[4684]: I0121 10:21:33.066474 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"fc166550-60f0-4fee-a249-db0689c07f60","Type":"ContainerStarted","Data":"2aeba41c0a968ea11d7e3aa65abeed243f98f694882c419621b1be2704390759"} Jan 21 10:21:33 crc kubenswrapper[4684]: I0121 10:21:33.122670 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b8tz5" Jan 21 10:21:33 crc kubenswrapper[4684]: I0121 10:21:33.289395 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b8tz5"] Jan 21 10:21:35 crc kubenswrapper[4684]: I0121 10:21:35.080653 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b8tz5" podUID="44d8b092-f590-4b18-bf07-509f87cadb00" 
containerName="registry-server" containerID="cri-o://7a25b1301706f309960a2c61d37d08b2ecaff8ce334d0f9b3f82028d4c90665d" gracePeriod=2 Jan 21 10:21:35 crc kubenswrapper[4684]: E0121 10:21:35.185837 4684 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44d8b092_f590_4b18_bf07_509f87cadb00.slice/crio-7a25b1301706f309960a2c61d37d08b2ecaff8ce334d0f9b3f82028d4c90665d.scope\": RecentStats: unable to find data in memory cache]" Jan 21 10:21:36 crc kubenswrapper[4684]: I0121 10:21:36.088198 4684 generic.go:334] "Generic (PLEG): container finished" podID="44d8b092-f590-4b18-bf07-509f87cadb00" containerID="7a25b1301706f309960a2c61d37d08b2ecaff8ce334d0f9b3f82028d4c90665d" exitCode=0 Jan 21 10:21:36 crc kubenswrapper[4684]: I0121 10:21:36.088298 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b8tz5" event={"ID":"44d8b092-f590-4b18-bf07-509f87cadb00","Type":"ContainerDied","Data":"7a25b1301706f309960a2c61d37d08b2ecaff8ce334d0f9b3f82028d4c90665d"} Jan 21 10:21:36 crc kubenswrapper[4684]: I0121 10:21:36.420436 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-78bcbbdcff-zbs45"] Jan 21 10:21:36 crc kubenswrapper[4684]: I0121 10:21:36.421109 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-78bcbbdcff-zbs45" Jan 21 10:21:36 crc kubenswrapper[4684]: I0121 10:21:36.439976 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-78bcbbdcff-zbs45"] Jan 21 10:21:36 crc kubenswrapper[4684]: I0121 10:21:36.584476 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrnjp\" (UniqueName: \"kubernetes.io/projected/e1a540ef-a29d-4944-b473-16c7efe8d573-kube-api-access-mrnjp\") pod \"default-snmp-webhook-78bcbbdcff-zbs45\" (UID: \"e1a540ef-a29d-4944-b473-16c7efe8d573\") " pod="service-telemetry/default-snmp-webhook-78bcbbdcff-zbs45" Jan 21 10:21:36 crc kubenswrapper[4684]: I0121 10:21:36.685929 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrnjp\" (UniqueName: \"kubernetes.io/projected/e1a540ef-a29d-4944-b473-16c7efe8d573-kube-api-access-mrnjp\") pod \"default-snmp-webhook-78bcbbdcff-zbs45\" (UID: \"e1a540ef-a29d-4944-b473-16c7efe8d573\") " pod="service-telemetry/default-snmp-webhook-78bcbbdcff-zbs45" Jan 21 10:21:36 crc kubenswrapper[4684]: I0121 10:21:36.703257 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrnjp\" (UniqueName: \"kubernetes.io/projected/e1a540ef-a29d-4944-b473-16c7efe8d573-kube-api-access-mrnjp\") pod \"default-snmp-webhook-78bcbbdcff-zbs45\" (UID: \"e1a540ef-a29d-4944-b473-16c7efe8d573\") " pod="service-telemetry/default-snmp-webhook-78bcbbdcff-zbs45" Jan 21 10:21:36 crc kubenswrapper[4684]: I0121 10:21:36.735239 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-78bcbbdcff-zbs45" Jan 21 10:21:36 crc kubenswrapper[4684]: I0121 10:21:36.975539 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-78bcbbdcff-zbs45"] Jan 21 10:21:36 crc kubenswrapper[4684]: W0121 10:21:36.990424 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1a540ef_a29d_4944_b473_16c7efe8d573.slice/crio-1bfda87b5d37d9360f2ec08e63b9f5b751a9ef79b2c1c4ac44650b964454840e WatchSource:0}: Error finding container 1bfda87b5d37d9360f2ec08e63b9f5b751a9ef79b2c1c4ac44650b964454840e: Status 404 returned error can't find the container with id 1bfda87b5d37d9360f2ec08e63b9f5b751a9ef79b2c1c4ac44650b964454840e Jan 21 10:21:37 crc kubenswrapper[4684]: I0121 10:21:37.042740 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b8tz5" Jan 21 10:21:37 crc kubenswrapper[4684]: I0121 10:21:37.097346 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b8tz5" event={"ID":"44d8b092-f590-4b18-bf07-509f87cadb00","Type":"ContainerDied","Data":"28f053de35b2e905fd2b0a6266b00d5fe64193608e98a9a4e6603e95964d3052"} Jan 21 10:21:37 crc kubenswrapper[4684]: I0121 10:21:37.097378 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b8tz5" Jan 21 10:21:37 crc kubenswrapper[4684]: I0121 10:21:37.097421 4684 scope.go:117] "RemoveContainer" containerID="7a25b1301706f309960a2c61d37d08b2ecaff8ce334d0f9b3f82028d4c90665d" Jan 21 10:21:37 crc kubenswrapper[4684]: I0121 10:21:37.099009 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-78bcbbdcff-zbs45" event={"ID":"e1a540ef-a29d-4944-b473-16c7efe8d573","Type":"ContainerStarted","Data":"1bfda87b5d37d9360f2ec08e63b9f5b751a9ef79b2c1c4ac44650b964454840e"} Jan 21 10:21:37 crc kubenswrapper[4684]: I0121 10:21:37.117206 4684 scope.go:117] "RemoveContainer" containerID="ede157c54b3dc543fdebbbac59a68db773b310538782b1a29e5cf35b921761c4" Jan 21 10:21:37 crc kubenswrapper[4684]: I0121 10:21:37.137519 4684 scope.go:117] "RemoveContainer" containerID="3c06d0ac8de9523695101da91248f6fd54909eae9f5f40705a062e3b4da9711b" Jan 21 10:21:37 crc kubenswrapper[4684]: I0121 10:21:37.192745 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggcwx\" (UniqueName: \"kubernetes.io/projected/44d8b092-f590-4b18-bf07-509f87cadb00-kube-api-access-ggcwx\") pod \"44d8b092-f590-4b18-bf07-509f87cadb00\" (UID: \"44d8b092-f590-4b18-bf07-509f87cadb00\") " Jan 21 10:21:37 crc kubenswrapper[4684]: I0121 10:21:37.192888 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44d8b092-f590-4b18-bf07-509f87cadb00-catalog-content\") pod \"44d8b092-f590-4b18-bf07-509f87cadb00\" (UID: \"44d8b092-f590-4b18-bf07-509f87cadb00\") " Jan 21 10:21:37 crc kubenswrapper[4684]: I0121 10:21:37.192916 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44d8b092-f590-4b18-bf07-509f87cadb00-utilities\") pod \"44d8b092-f590-4b18-bf07-509f87cadb00\" (UID: \"44d8b092-f590-4b18-bf07-509f87cadb00\") " Jan 21 10:21:37 crc kubenswrapper[4684]: I0121 10:21:37.194053 4684 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44d8b092-f590-4b18-bf07-509f87cadb00-utilities" (OuterVolumeSpecName: "utilities") pod "44d8b092-f590-4b18-bf07-509f87cadb00" (UID: "44d8b092-f590-4b18-bf07-509f87cadb00"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:21:37 crc kubenswrapper[4684]: I0121 10:21:37.198725 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44d8b092-f590-4b18-bf07-509f87cadb00-kube-api-access-ggcwx" (OuterVolumeSpecName: "kube-api-access-ggcwx") pod "44d8b092-f590-4b18-bf07-509f87cadb00" (UID: "44d8b092-f590-4b18-bf07-509f87cadb00"). InnerVolumeSpecName "kube-api-access-ggcwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:21:37 crc kubenswrapper[4684]: I0121 10:21:37.244069 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44d8b092-f590-4b18-bf07-509f87cadb00-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44d8b092-f590-4b18-bf07-509f87cadb00" (UID: "44d8b092-f590-4b18-bf07-509f87cadb00"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:21:37 crc kubenswrapper[4684]: I0121 10:21:37.294805 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44d8b092-f590-4b18-bf07-509f87cadb00-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:21:37 crc kubenswrapper[4684]: I0121 10:21:37.294856 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44d8b092-f590-4b18-bf07-509f87cadb00-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:21:37 crc kubenswrapper[4684]: I0121 10:21:37.294868 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggcwx\" (UniqueName: \"kubernetes.io/projected/44d8b092-f590-4b18-bf07-509f87cadb00-kube-api-access-ggcwx\") on node \"crc\" DevicePath \"\"" Jan 21 10:21:37 crc kubenswrapper[4684]: I0121 10:21:37.302137 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:21:37 crc kubenswrapper[4684]: I0121 10:21:37.302206 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:21:37 crc kubenswrapper[4684]: I0121 10:21:37.433157 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b8tz5"] Jan 21 10:21:37 crc kubenswrapper[4684]: I0121 10:21:37.442347 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b8tz5"] Jan 21 10:21:38 crc kubenswrapper[4684]: I0121 10:21:38.525668 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44d8b092-f590-4b18-bf07-509f87cadb00" path="/var/lib/kubelet/pods/44d8b092-f590-4b18-bf07-509f87cadb00/volumes" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.115494 4684 generic.go:334] "Generic (PLEG): container finished" 
podID="fc166550-60f0-4fee-a249-db0689c07f60" containerID="2aeba41c0a968ea11d7e3aa65abeed243f98f694882c419621b1be2704390759" exitCode=0 Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.115535 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"fc166550-60f0-4fee-a249-db0689c07f60","Type":"ContainerDied","Data":"2aeba41c0a968ea11d7e3aa65abeed243f98f694882c419621b1be2704390759"} Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.566185 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 21 10:21:39 crc kubenswrapper[4684]: E0121 10:21:39.566865 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44d8b092-f590-4b18-bf07-509f87cadb00" containerName="registry-server" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.566882 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="44d8b092-f590-4b18-bf07-509f87cadb00" containerName="registry-server" Jan 21 10:21:39 crc kubenswrapper[4684]: E0121 10:21:39.566897 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44d8b092-f590-4b18-bf07-509f87cadb00" containerName="extract-utilities" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.566904 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="44d8b092-f590-4b18-bf07-509f87cadb00" containerName="extract-utilities" Jan 21 10:21:39 crc kubenswrapper[4684]: E0121 10:21:39.566918 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44d8b092-f590-4b18-bf07-509f87cadb00" containerName="extract-content" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.566926 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="44d8b092-f590-4b18-bf07-509f87cadb00" containerName="extract-content" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.567121 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="44d8b092-f590-4b18-bf07-509f87cadb00" containerName="registry-server" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.568509 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.571837 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-cluster-tls-config" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.571875 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-alertmanager-proxy-tls" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.571938 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-tls-assets-0" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.572562 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-web-config" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.572895 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-generated" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.573164 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-stf-dockercfg-lsg4d" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.574503 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.744190 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6e76ba58-56c8-4465-bcee-35a5b361608b-web-config\") pod \"alertmanager-default-0\" (UID: \"6e76ba58-56c8-4465-bcee-35a5b361608b\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.744237 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l48xm\" (UniqueName: \"kubernetes.io/projected/6e76ba58-56c8-4465-bcee-35a5b361608b-kube-api-access-l48xm\") pod \"alertmanager-default-0\" (UID: \"6e76ba58-56c8-4465-bcee-35a5b361608b\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.744259 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/6e76ba58-56c8-4465-bcee-35a5b361608b-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"6e76ba58-56c8-4465-bcee-35a5b361608b\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.744345 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6e76ba58-56c8-4465-bcee-35a5b361608b-config-out\") pod \"alertmanager-default-0\" (UID: \"6e76ba58-56c8-4465-bcee-35a5b361608b\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.744426 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6e76ba58-56c8-4465-bcee-35a5b361608b-config-volume\") pod \"alertmanager-default-0\" (UID: \"6e76ba58-56c8-4465-bcee-35a5b361608b\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.744463 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" 
(UniqueName: \"kubernetes.io/projected/6e76ba58-56c8-4465-bcee-35a5b361608b-tls-assets\") pod \"alertmanager-default-0\" (UID: \"6e76ba58-56c8-4465-bcee-35a5b361608b\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.744506 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6e76ba58-56c8-4465-bcee-35a5b361608b-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"6e76ba58-56c8-4465-bcee-35a5b361608b\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.744581 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-23acef1e-5b9b-458d-a9d5-49b95353ddc4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23acef1e-5b9b-458d-a9d5-49b95353ddc4\") pod \"alertmanager-default-0\" (UID: \"6e76ba58-56c8-4465-bcee-35a5b361608b\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.744626 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/6e76ba58-56c8-4465-bcee-35a5b361608b-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"6e76ba58-56c8-4465-bcee-35a5b361608b\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.845705 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/6e76ba58-56c8-4465-bcee-35a5b361608b-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"6e76ba58-56c8-4465-bcee-35a5b361608b\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.845776 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6e76ba58-56c8-4465-bcee-35a5b361608b-web-config\") pod \"alertmanager-default-0\" (UID: \"6e76ba58-56c8-4465-bcee-35a5b361608b\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.845801 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l48xm\" (UniqueName: \"kubernetes.io/projected/6e76ba58-56c8-4465-bcee-35a5b361608b-kube-api-access-l48xm\") pod \"alertmanager-default-0\" (UID: \"6e76ba58-56c8-4465-bcee-35a5b361608b\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.845826 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/6e76ba58-56c8-4465-bcee-35a5b361608b-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"6e76ba58-56c8-4465-bcee-35a5b361608b\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.845880 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6e76ba58-56c8-4465-bcee-35a5b361608b-config-out\") pod \"alertmanager-default-0\" (UID: \"6e76ba58-56c8-4465-bcee-35a5b361608b\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 
10:21:39.845926 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6e76ba58-56c8-4465-bcee-35a5b361608b-config-volume\") pod \"alertmanager-default-0\" (UID: \"6e76ba58-56c8-4465-bcee-35a5b361608b\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.845953 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6e76ba58-56c8-4465-bcee-35a5b361608b-tls-assets\") pod \"alertmanager-default-0\" (UID: \"6e76ba58-56c8-4465-bcee-35a5b361608b\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.845980 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6e76ba58-56c8-4465-bcee-35a5b361608b-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"6e76ba58-56c8-4465-bcee-35a5b361608b\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.846022 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-23acef1e-5b9b-458d-a9d5-49b95353ddc4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23acef1e-5b9b-458d-a9d5-49b95353ddc4\") pod \"alertmanager-default-0\" (UID: \"6e76ba58-56c8-4465-bcee-35a5b361608b\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:21:39 crc kubenswrapper[4684]: E0121 10:21:39.848805 4684 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 21 10:21:39 crc kubenswrapper[4684]: E0121 10:21:39.848869 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e76ba58-56c8-4465-bcee-35a5b361608b-secret-default-alertmanager-proxy-tls podName:6e76ba58-56c8-4465-bcee-35a5b361608b nodeName:}" failed. No retries permitted until 2026-01-21 10:21:40.348852246 +0000 UTC m=+938.106935213 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/6e76ba58-56c8-4465-bcee-35a5b361608b-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "6e76ba58-56c8-4465-bcee-35a5b361608b") : secret "default-alertmanager-proxy-tls" not found Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.853537 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6e76ba58-56c8-4465-bcee-35a5b361608b-config-out\") pod \"alertmanager-default-0\" (UID: \"6e76ba58-56c8-4465-bcee-35a5b361608b\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.853633 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/6e76ba58-56c8-4465-bcee-35a5b361608b-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"6e76ba58-56c8-4465-bcee-35a5b361608b\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.853703 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6e76ba58-56c8-4465-bcee-35a5b361608b-config-volume\") pod \"alertmanager-default-0\" (UID: \"6e76ba58-56c8-4465-bcee-35a5b361608b\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.853703 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6e76ba58-56c8-4465-bcee-35a5b361608b-web-config\") pod \"alertmanager-default-0\" (UID: \"6e76ba58-56c8-4465-bcee-35a5b361608b\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.854840 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6e76ba58-56c8-4465-bcee-35a5b361608b-tls-assets\") pod \"alertmanager-default-0\" (UID: \"6e76ba58-56c8-4465-bcee-35a5b361608b\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.859088 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/6e76ba58-56c8-4465-bcee-35a5b361608b-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"6e76ba58-56c8-4465-bcee-35a5b361608b\") " pod="service-telemetry/alertmanager-default-0" Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.859627 4684 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.859657 4684 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-23acef1e-5b9b-458d-a9d5-49b95353ddc4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23acef1e-5b9b-458d-a9d5-49b95353ddc4\") pod \"alertmanager-default-0\" (UID: \"6e76ba58-56c8-4465-bcee-35a5b361608b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1dd128aa32bf23746276fa28119187401cda327c5ab1754cac80f4dff1ddf1c/globalmount\"" pod="service-telemetry/alertmanager-default-0"
Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.868351 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l48xm\" (UniqueName: \"kubernetes.io/projected/6e76ba58-56c8-4465-bcee-35a5b361608b-kube-api-access-l48xm\") pod \"alertmanager-default-0\" (UID: \"6e76ba58-56c8-4465-bcee-35a5b361608b\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 10:21:39 crc kubenswrapper[4684]: I0121 10:21:39.889020 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-23acef1e-5b9b-458d-a9d5-49b95353ddc4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-23acef1e-5b9b-458d-a9d5-49b95353ddc4\") pod \"alertmanager-default-0\" (UID: \"6e76ba58-56c8-4465-bcee-35a5b361608b\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 10:21:40 crc kubenswrapper[4684]: I0121 10:21:40.354923 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6e76ba58-56c8-4465-bcee-35a5b361608b-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"6e76ba58-56c8-4465-bcee-35a5b361608b\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 10:21:40 crc kubenswrapper[4684]: E0121 10:21:40.355128 4684 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Jan 21 10:21:40 crc kubenswrapper[4684]: E0121 10:21:40.355215 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e76ba58-56c8-4465-bcee-35a5b361608b-secret-default-alertmanager-proxy-tls podName:6e76ba58-56c8-4465-bcee-35a5b361608b nodeName:}" failed. No retries permitted until 2026-01-21 10:21:41.355195952 +0000 UTC m=+939.113278919 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/6e76ba58-56c8-4465-bcee-35a5b361608b-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "6e76ba58-56c8-4465-bcee-35a5b361608b") : secret "default-alertmanager-proxy-tls" not found
Jan 21 10:21:41 crc kubenswrapper[4684]: I0121 10:21:41.373483 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6e76ba58-56c8-4465-bcee-35a5b361608b-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"6e76ba58-56c8-4465-bcee-35a5b361608b\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 10:21:41 crc kubenswrapper[4684]: E0121 10:21:41.373930 4684 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Jan 21 10:21:41 crc kubenswrapper[4684]: E0121 10:21:41.373982 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e76ba58-56c8-4465-bcee-35a5b361608b-secret-default-alertmanager-proxy-tls podName:6e76ba58-56c8-4465-bcee-35a5b361608b nodeName:}" failed. No retries permitted until 2026-01-21 10:21:43.373965752 +0000 UTC m=+941.132048719 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/6e76ba58-56c8-4465-bcee-35a5b361608b-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "6e76ba58-56c8-4465-bcee-35a5b361608b") : secret "default-alertmanager-proxy-tls" not found
Jan 21 10:21:43 crc kubenswrapper[4684]: I0121 10:21:43.404201 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6e76ba58-56c8-4465-bcee-35a5b361608b-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"6e76ba58-56c8-4465-bcee-35a5b361608b\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 10:21:43 crc kubenswrapper[4684]: I0121 10:21:43.409904 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6e76ba58-56c8-4465-bcee-35a5b361608b-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"6e76ba58-56c8-4465-bcee-35a5b361608b\") " pod="service-telemetry/alertmanager-default-0"
Jan 21 10:21:43 crc kubenswrapper[4684]: I0121 10:21:43.492949 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0"
Jan 21 10:21:49 crc kubenswrapper[4684]: I0121 10:21:49.726905 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"]
Jan 21 10:21:50 crc kubenswrapper[4684]: I0121 10:21:50.206735 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"fc166550-60f0-4fee-a249-db0689c07f60","Type":"ContainerStarted","Data":"1cda8bf6eba8c846ccda630dfd04c77ba21909230ce0892b3d3e97229890617a"}
Jan 21 10:21:50 crc kubenswrapper[4684]: I0121 10:21:50.208166 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"6e76ba58-56c8-4465-bcee-35a5b361608b","Type":"ContainerStarted","Data":"f8fcd47a638ff9f5ca8fa7734a8e44516716c4a2df08f4771201859f1db069f2"}
Jan 21 10:21:50 crc kubenswrapper[4684]: I0121 10:21:50.209898 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-78bcbbdcff-zbs45" event={"ID":"e1a540ef-a29d-4944-b473-16c7efe8d573","Type":"ContainerStarted","Data":"598d53ee34a31bd794f6dbbfa85016d003f4f083913461c714518e545d4df186"}
Jan 21 10:21:50 crc kubenswrapper[4684]: I0121 10:21:50.226330 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-78bcbbdcff-zbs45" podStartSLOduration=1.558402302 podStartE2EDuration="14.226310923s" podCreationTimestamp="2026-01-21 10:21:36 +0000 UTC" firstStartedPulling="2026-01-21 10:21:36.992643407 +0000 UTC m=+934.750726374" lastFinishedPulling="2026-01-21 10:21:49.660552018 +0000 UTC m=+947.418634995" observedRunningTime="2026-01-21 10:21:50.221805134 +0000 UTC m=+947.979888111" watchObservedRunningTime="2026-01-21 10:21:50.226310923 +0000 UTC m=+947.984393880"
Jan 21 10:21:52 crc kubenswrapper[4684]: I0121 10:21:52.225043 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"fc166550-60f0-4fee-a249-db0689c07f60","Type":"ContainerStarted","Data":"486287f06b1f9bad0f98e62f9ca98502aae2045485fff571f0ba1287f94c22e8"}
Jan 21 10:21:52 crc kubenswrapper[4684]: I0121 10:21:52.226511 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"6e76ba58-56c8-4465-bcee-35a5b361608b","Type":"ContainerStarted","Data":"d09cc88f1742c2ee344ff9afd2bed23ac8df4556f4e6467ff1a06f6fa4e7ac98"}
Jan 21 10:21:53 crc kubenswrapper[4684]: I0121 10:21:53.756650 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-6859n"]
Jan 21 10:21:53 crc kubenswrapper[4684]: I0121 10:21:53.758417 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-6859n"
Jan 21 10:21:53 crc kubenswrapper[4684]: I0121 10:21:53.763957 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-6859n"]
Jan 21 10:21:53 crc kubenswrapper[4684]: I0121 10:21:53.765929 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-session-secret"
Jan 21 10:21:53 crc kubenswrapper[4684]: I0121 10:21:53.766432 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-dockercfg-74jgw"
Jan 21 10:21:53 crc kubenswrapper[4684]: I0121 10:21:53.766468 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-coll-meter-proxy-tls"
Jan 21 10:21:53 crc kubenswrapper[4684]: I0121 10:21:53.771141 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-meter-sg-core-configmap"
Jan 21 10:21:53 crc kubenswrapper[4684]: I0121 10:21:53.851598 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/e2618a9c-cdc6-48e5-8b8e-eeb451329b9c-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-6859n\" (UID: \"e2618a9c-cdc6-48e5-8b8e-eeb451329b9c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-6859n"
Jan 21 10:21:53 crc kubenswrapper[4684]: I0121 10:21:53.851653 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/e2618a9c-cdc6-48e5-8b8e-eeb451329b9c-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-6859n\" (UID: \"e2618a9c-cdc6-48e5-8b8e-eeb451329b9c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-6859n"
Jan 21 10:21:53 crc kubenswrapper[4684]: I0121 10:21:53.851678 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bpwf\" (UniqueName: \"kubernetes.io/projected/e2618a9c-cdc6-48e5-8b8e-eeb451329b9c-kube-api-access-7bpwf\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-6859n\" (UID: \"e2618a9c-cdc6-48e5-8b8e-eeb451329b9c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-6859n"
Jan 21 10:21:53 crc kubenswrapper[4684]: I0121 10:21:53.852041 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e2618a9c-cdc6-48e5-8b8e-eeb451329b9c-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-6859n\" (UID: \"e2618a9c-cdc6-48e5-8b8e-eeb451329b9c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-6859n"
Jan 21 10:21:53 crc kubenswrapper[4684]: I0121 10:21:53.852200 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e2618a9c-cdc6-48e5-8b8e-eeb451329b9c-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-6859n\" (UID: \"e2618a9c-cdc6-48e5-8b8e-eeb451329b9c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-6859n"
Jan 21 10:21:53 crc kubenswrapper[4684]: I0121 10:21:53.953054 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/e2618a9c-cdc6-48e5-8b8e-eeb451329b9c-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-6859n\" (UID: \"e2618a9c-cdc6-48e5-8b8e-eeb451329b9c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-6859n"
Jan 21 10:21:53 crc kubenswrapper[4684]: I0121 10:21:53.953117 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/e2618a9c-cdc6-48e5-8b8e-eeb451329b9c-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-6859n\" (UID: \"e2618a9c-cdc6-48e5-8b8e-eeb451329b9c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-6859n"
Jan 21 10:21:53 crc kubenswrapper[4684]: I0121 10:21:53.953151 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bpwf\" (UniqueName: \"kubernetes.io/projected/e2618a9c-cdc6-48e5-8b8e-eeb451329b9c-kube-api-access-7bpwf\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-6859n\" (UID: \"e2618a9c-cdc6-48e5-8b8e-eeb451329b9c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-6859n"
Jan 21 10:21:53 crc kubenswrapper[4684]: I0121 10:21:53.953252 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e2618a9c-cdc6-48e5-8b8e-eeb451329b9c-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-6859n\" (UID: \"e2618a9c-cdc6-48e5-8b8e-eeb451329b9c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-6859n"
Jan 21 10:21:53 crc kubenswrapper[4684]: I0121 10:21:53.953287 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e2618a9c-cdc6-48e5-8b8e-eeb451329b9c-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-6859n\" (UID: \"e2618a9c-cdc6-48e5-8b8e-eeb451329b9c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-6859n"
Jan 21 10:21:53 crc kubenswrapper[4684]: E0121 10:21:53.953475 4684 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found
Jan 21 10:21:53 crc kubenswrapper[4684]: E0121 10:21:53.953560 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2618a9c-cdc6-48e5-8b8e-eeb451329b9c-default-cloud1-coll-meter-proxy-tls podName:e2618a9c-cdc6-48e5-8b8e-eeb451329b9c nodeName:}" failed. No retries permitted until 2026-01-21 10:21:54.453539094 +0000 UTC m=+952.211622061 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/e2618a9c-cdc6-48e5-8b8e-eeb451329b9c-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7996dc9458-6859n" (UID: "e2618a9c-cdc6-48e5-8b8e-eeb451329b9c") : secret "default-cloud1-coll-meter-proxy-tls" not found
Jan 21 10:21:53 crc kubenswrapper[4684]: I0121 10:21:53.953915 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/e2618a9c-cdc6-48e5-8b8e-eeb451329b9c-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-6859n\" (UID: \"e2618a9c-cdc6-48e5-8b8e-eeb451329b9c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-6859n"
Jan 21 10:21:53 crc kubenswrapper[4684]: I0121 10:21:53.954380 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/e2618a9c-cdc6-48e5-8b8e-eeb451329b9c-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-6859n\" (UID: \"e2618a9c-cdc6-48e5-8b8e-eeb451329b9c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-6859n"
Jan 21 10:21:53 crc kubenswrapper[4684]: I0121 10:21:53.959326 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/e2618a9c-cdc6-48e5-8b8e-eeb451329b9c-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-6859n\" (UID: \"e2618a9c-cdc6-48e5-8b8e-eeb451329b9c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-6859n"
Jan 21 10:21:53 crc kubenswrapper[4684]: I0121 10:21:53.975284 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bpwf\" (UniqueName: \"kubernetes.io/projected/e2618a9c-cdc6-48e5-8b8e-eeb451329b9c-kube-api-access-7bpwf\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-6859n\" (UID: \"e2618a9c-cdc6-48e5-8b8e-eeb451329b9c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-6859n"
Jan 21 10:21:54 crc kubenswrapper[4684]: I0121 10:21:54.458472 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e2618a9c-cdc6-48e5-8b8e-eeb451329b9c-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-6859n\" (UID: \"e2618a9c-cdc6-48e5-8b8e-eeb451329b9c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-6859n"
Jan 21 10:21:54 crc kubenswrapper[4684]: E0121 10:21:54.458529 4684 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found
Jan 21 10:21:54 crc kubenswrapper[4684]: E0121 10:21:54.459001 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2618a9c-cdc6-48e5-8b8e-eeb451329b9c-default-cloud1-coll-meter-proxy-tls podName:e2618a9c-cdc6-48e5-8b8e-eeb451329b9c nodeName:}" failed. No retries permitted until 2026-01-21 10:21:55.458961821 +0000 UTC m=+953.217044798 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/e2618a9c-cdc6-48e5-8b8e-eeb451329b9c-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7996dc9458-6859n" (UID: "e2618a9c-cdc6-48e5-8b8e-eeb451329b9c") : secret "default-cloud1-coll-meter-proxy-tls" not found
Jan 21 10:21:55 crc kubenswrapper[4684]: I0121 10:21:55.478587 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e2618a9c-cdc6-48e5-8b8e-eeb451329b9c-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-6859n\" (UID: \"e2618a9c-cdc6-48e5-8b8e-eeb451329b9c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-6859n"
Jan 21 10:21:55 crc kubenswrapper[4684]: I0121 10:21:55.487996 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/e2618a9c-cdc6-48e5-8b8e-eeb451329b9c-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-6859n\" (UID: \"e2618a9c-cdc6-48e5-8b8e-eeb451329b9c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-6859n"
Jan 21 10:21:55 crc kubenswrapper[4684]: I0121 10:21:55.573957 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-6859n"
Jan 21 10:21:56 crc kubenswrapper[4684]: I0121 10:21:56.672144 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4"]
Jan 21 10:21:56 crc kubenswrapper[4684]: I0121 10:21:56.673996 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4"
Jan 21 10:21:56 crc kubenswrapper[4684]: I0121 10:21:56.677236 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-ceil-meter-proxy-tls"
Jan 21 10:21:56 crc kubenswrapper[4684]: I0121 10:21:56.677306 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-meter-sg-core-configmap"
Jan 21 10:21:56 crc kubenswrapper[4684]: I0121 10:21:56.683708 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4"]
Jan 21 10:21:56 crc kubenswrapper[4684]: I0121 10:21:56.702494 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/a37b014b-6f8d-4179-be35-4896b00e0aec-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4\" (UID: \"a37b014b-6f8d-4179-be35-4896b00e0aec\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4"
Jan 21 10:21:56 crc kubenswrapper[4684]: I0121 10:21:56.702563 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/a37b014b-6f8d-4179-be35-4896b00e0aec-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4\" (UID: \"a37b014b-6f8d-4179-be35-4896b00e0aec\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4"
Jan 21 10:21:56 crc kubenswrapper[4684]: I0121 10:21:56.702621 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tkft\" (UniqueName: \"kubernetes.io/projected/a37b014b-6f8d-4179-be35-4896b00e0aec-kube-api-access-2tkft\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4\" (UID: \"a37b014b-6f8d-4179-be35-4896b00e0aec\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4"
Jan 21 10:21:56 crc kubenswrapper[4684]: I0121 10:21:56.702656 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a37b014b-6f8d-4179-be35-4896b00e0aec-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4\" (UID: \"a37b014b-6f8d-4179-be35-4896b00e0aec\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4"
Jan 21 10:21:56 crc kubenswrapper[4684]: I0121 10:21:56.702677 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/a37b014b-6f8d-4179-be35-4896b00e0aec-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4\" (UID: \"a37b014b-6f8d-4179-be35-4896b00e0aec\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4"
Jan 21 10:21:56 crc kubenswrapper[4684]: I0121 10:21:56.804140 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tkft\" (UniqueName: \"kubernetes.io/projected/a37b014b-6f8d-4179-be35-4896b00e0aec-kube-api-access-2tkft\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4\" (UID: \"a37b014b-6f8d-4179-be35-4896b00e0aec\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4"
Jan 21 10:21:56 crc kubenswrapper[4684]: I0121 10:21:56.804519 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a37b014b-6f8d-4179-be35-4896b00e0aec-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4\" (UID: \"a37b014b-6f8d-4179-be35-4896b00e0aec\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4"
Jan 21 10:21:56 crc kubenswrapper[4684]: E0121 10:21:56.804674 4684 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found
Jan 21 10:21:56 crc kubenswrapper[4684]: E0121 10:21:56.804783 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a37b014b-6f8d-4179-be35-4896b00e0aec-default-cloud1-ceil-meter-proxy-tls podName:a37b014b-6f8d-4179-be35-4896b00e0aec nodeName:}" failed. No retries permitted until 2026-01-21 10:21:57.3047575 +0000 UTC m=+955.062840487 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/a37b014b-6f8d-4179-be35-4896b00e0aec-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4" (UID: "a37b014b-6f8d-4179-be35-4896b00e0aec") : secret "default-cloud1-ceil-meter-proxy-tls" not found
Jan 21 10:21:56 crc kubenswrapper[4684]: I0121 10:21:56.804815 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/a37b014b-6f8d-4179-be35-4896b00e0aec-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4\" (UID: \"a37b014b-6f8d-4179-be35-4896b00e0aec\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4"
Jan 21 10:21:56 crc kubenswrapper[4684]: I0121 10:21:56.805184 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/a37b014b-6f8d-4179-be35-4896b00e0aec-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4\" (UID: \"a37b014b-6f8d-4179-be35-4896b00e0aec\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4"
Jan 21 10:21:56 crc kubenswrapper[4684]: I0121 10:21:56.805381 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/a37b014b-6f8d-4179-be35-4896b00e0aec-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4\" (UID: \"a37b014b-6f8d-4179-be35-4896b00e0aec\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4"
Jan 21 10:21:56 crc kubenswrapper[4684]: I0121 10:21:56.805574 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/a37b014b-6f8d-4179-be35-4896b00e0aec-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4\" (UID: \"a37b014b-6f8d-4179-be35-4896b00e0aec\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4"
Jan 21 10:21:56 crc kubenswrapper[4684]: I0121 10:21:56.806656 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/a37b014b-6f8d-4179-be35-4896b00e0aec-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4\" (UID: \"a37b014b-6f8d-4179-be35-4896b00e0aec\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4"
Jan 21 10:21:56 crc kubenswrapper[4684]: I0121 10:21:56.823184 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/a37b014b-6f8d-4179-be35-4896b00e0aec-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4\" (UID: \"a37b014b-6f8d-4179-be35-4896b00e0aec\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4"
Jan 21 10:21:56 crc kubenswrapper[4684]: I0121 10:21:56.827632 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tkft\" (UniqueName: \"kubernetes.io/projected/a37b014b-6f8d-4179-be35-4896b00e0aec-kube-api-access-2tkft\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4\" (UID: \"a37b014b-6f8d-4179-be35-4896b00e0aec\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4"
Jan 21 10:21:57 crc kubenswrapper[4684]: I0121 10:21:57.311639 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a37b014b-6f8d-4179-be35-4896b00e0aec-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4\" (UID: \"a37b014b-6f8d-4179-be35-4896b00e0aec\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4"
Jan 21 10:21:57 crc kubenswrapper[4684]: E0121 10:21:57.311858 4684 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found
Jan 21 10:21:57 crc kubenswrapper[4684]: E0121 10:21:57.311954 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a37b014b-6f8d-4179-be35-4896b00e0aec-default-cloud1-ceil-meter-proxy-tls podName:a37b014b-6f8d-4179-be35-4896b00e0aec nodeName:}" failed. No retries permitted until 2026-01-21 10:21:58.311934061 +0000 UTC m=+956.070017028 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/a37b014b-6f8d-4179-be35-4896b00e0aec-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4" (UID: "a37b014b-6f8d-4179-be35-4896b00e0aec") : secret "default-cloud1-ceil-meter-proxy-tls" not found
Jan 21 10:21:58 crc kubenswrapper[4684]: I0121 10:21:58.325415 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a37b014b-6f8d-4179-be35-4896b00e0aec-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4\" (UID: \"a37b014b-6f8d-4179-be35-4896b00e0aec\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4"
Jan 21 10:21:58 crc kubenswrapper[4684]: I0121 10:21:58.332339 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a37b014b-6f8d-4179-be35-4896b00e0aec-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4\" (UID: \"a37b014b-6f8d-4179-be35-4896b00e0aec\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4"
Jan 21 10:21:58 crc kubenswrapper[4684]: I0121 10:21:58.492173 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-6859n"]
Jan 21 10:21:58 crc kubenswrapper[4684]: I0121 10:21:58.506435 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4"
Jan 21 10:21:58 crc kubenswrapper[4684]: I0121 10:21:58.934984 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4"]
Jan 21 10:21:58 crc kubenswrapper[4684]: W0121 10:21:58.939693 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda37b014b_6f8d_4179_be35_4896b00e0aec.slice/crio-a1b13f10e0ef2fd7e0b48cc769db92a543f0881fe7784aaafda3211e1f64b087 WatchSource:0}: Error finding container a1b13f10e0ef2fd7e0b48cc769db92a543f0881fe7784aaafda3211e1f64b087: Status 404 returned error can't find the container with id a1b13f10e0ef2fd7e0b48cc769db92a543f0881fe7784aaafda3211e1f64b087
Jan 21 10:21:59 crc kubenswrapper[4684]: I0121 10:21:59.280751 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"fc166550-60f0-4fee-a249-db0689c07f60","Type":"ContainerStarted","Data":"b46f641b602e237e11bb1909cba3709fe1e2fddb2c58a66b6e0fbc1a01d70600"}
Jan 21 10:21:59 crc kubenswrapper[4684]: I0121 10:21:59.282625 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-6859n" event={"ID":"e2618a9c-cdc6-48e5-8b8e-eeb451329b9c","Type":"ContainerStarted","Data":"fe682b6f9bdbed755581282aebff9689063c84ea34420af3cab1ebf3e478b8f9"}
Jan 21 10:21:59 crc kubenswrapper[4684]: I0121 10:21:59.282672 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-6859n" event={"ID":"e2618a9c-cdc6-48e5-8b8e-eeb451329b9c","Type":"ContainerStarted","Data":"853d07dbf4df30c4575787145147f2ebfc1a04a6083e2a5fcfbe2f7ab850f7ed"}
Jan 21 10:21:59 crc kubenswrapper[4684]: I0121 10:21:59.284552 4684 generic.go:334] "Generic (PLEG): container finished" podID="6e76ba58-56c8-4465-bcee-35a5b361608b" containerID="d09cc88f1742c2ee344ff9afd2bed23ac8df4556f4e6467ff1a06f6fa4e7ac98" exitCode=0
Jan 21 10:21:59 crc kubenswrapper[4684]: I0121 10:21:59.284617 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"6e76ba58-56c8-4465-bcee-35a5b361608b","Type":"ContainerDied","Data":"d09cc88f1742c2ee344ff9afd2bed23ac8df4556f4e6467ff1a06f6fa4e7ac98"}
Jan 21 10:21:59 crc kubenswrapper[4684]: I0121 10:21:59.285721 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4" event={"ID":"a37b014b-6f8d-4179-be35-4896b00e0aec","Type":"ContainerStarted","Data":"a1b13f10e0ef2fd7e0b48cc769db92a543f0881fe7784aaafda3211e1f64b087"}
Jan 21 10:21:59 crc kubenswrapper[4684]: I0121 10:21:59.315811 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=4.052855644 podStartE2EDuration="35.315790474s" podCreationTimestamp="2026-01-21 10:21:24 +0000 UTC" firstStartedPulling="2026-01-21 10:21:27.13793552 +0000 UTC m=+924.896018497" lastFinishedPulling="2026-01-21 10:21:58.40087036 +0000 UTC m=+956.158953327" observedRunningTime="2026-01-21 10:21:59.311293525 +0000 UTC m=+957.069376492" watchObservedRunningTime="2026-01-21 10:21:59.315790474 +0000 UTC m=+957.073873451"
Jan 21 10:21:59 crc kubenswrapper[4684]: I0121 10:21:59.757940 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx"]
Jan 21 10:21:59 crc kubenswrapper[4684]: I0121 10:21:59.760230 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx"
Jan 21 10:21:59 crc kubenswrapper[4684]: I0121 10:21:59.765609 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-sens-meter-sg-core-configmap"
Jan 21 10:21:59 crc kubenswrapper[4684]: I0121 10:21:59.765792 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-sens-meter-proxy-tls"
Jan 21 10:21:59 crc kubenswrapper[4684]: I0121 10:21:59.772882 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx"]
Jan 21 10:21:59 crc kubenswrapper[4684]: I0121 10:21:59.949584 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/6ea3f691-f7ad-4d69-97ef-3427114b483b-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx\" (UID: \"6ea3f691-f7ad-4d69-97ef-3427114b483b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx"
Jan 21 10:21:59 crc kubenswrapper[4684]: I0121 10:21:59.949665 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6ea3f691-f7ad-4d69-97ef-3427114b483b-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx\" (UID: \"6ea3f691-f7ad-4d69-97ef-3427114b483b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx"
Jan 21 10:21:59 crc kubenswrapper[4684]: I0121 10:21:59.949684 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/6ea3f691-f7ad-4d69-97ef-3427114b483b-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx\" (UID: \"6ea3f691-f7ad-4d69-97ef-3427114b483b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx"
Jan 21 10:21:59 crc kubenswrapper[4684]: I0121 10:21:59.949720 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/6ea3f691-f7ad-4d69-97ef-3427114b483b-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx\" (UID: \"6ea3f691-f7ad-4d69-97ef-3427114b483b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx"
Jan 21 10:21:59 crc kubenswrapper[4684]: I0121 10:21:59.949738 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd95r\" (UniqueName: \"kubernetes.io/projected/6ea3f691-f7ad-4d69-97ef-3427114b483b-kube-api-access-kd95r\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx\" (UID: \"6ea3f691-f7ad-4d69-97ef-3427114b483b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx"
Jan 21 10:22:00 crc kubenswrapper[4684]: I0121 10:22:00.050641 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/6ea3f691-f7ad-4d69-97ef-3427114b483b-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx\" (UID: \"6ea3f691-f7ad-4d69-97ef-3427114b483b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx"
Jan 21 10:22:00 crc kubenswrapper[4684]: I0121 10:22:00.050727 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6ea3f691-f7ad-4d69-97ef-3427114b483b-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx\" (UID: \"6ea3f691-f7ad-4d69-97ef-3427114b483b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx"
Jan 21 10:22:00 crc kubenswrapper[4684]: I0121 10:22:00.050745 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/6ea3f691-f7ad-4d69-97ef-3427114b483b-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx\" (UID: \"6ea3f691-f7ad-4d69-97ef-3427114b483b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx"
Jan 21 10:22:00 crc kubenswrapper[4684]: I0121 10:22:00.050778 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/6ea3f691-f7ad-4d69-97ef-3427114b483b-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx\" (UID: \"6ea3f691-f7ad-4d69-97ef-3427114b483b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx"
Jan 21 10:22:00 crc kubenswrapper[4684]: I0121 10:22:00.050793 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd95r\" (UniqueName: \"kubernetes.io/projected/6ea3f691-f7ad-4d69-97ef-3427114b483b-kube-api-access-kd95r\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx\" (UID: \"6ea3f691-f7ad-4d69-97ef-3427114b483b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx"
Jan 21 10:22:00 crc kubenswrapper[4684]: I0121 10:22:00.051641 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/6ea3f691-f7ad-4d69-97ef-3427114b483b-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx\" (UID: \"6ea3f691-f7ad-4d69-97ef-3427114b483b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx"
Jan 21 10:22:00 crc kubenswrapper[4684]: E0121 10:22:00.051707 4684 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found
Jan 21 10:22:00 crc kubenswrapper[4684]: E0121 10:22:00.051746 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ea3f691-f7ad-4d69-97ef-3427114b483b-default-cloud1-sens-meter-proxy-tls podName:6ea3f691-f7ad-4d69-97ef-3427114b483b nodeName:}" failed. No retries permitted until 2026-01-21 10:22:00.551731987 +0000 UTC m=+958.309814954 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/6ea3f691-f7ad-4d69-97ef-3427114b483b-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx" (UID: "6ea3f691-f7ad-4d69-97ef-3427114b483b") : secret "default-cloud1-sens-meter-proxy-tls" not found Jan 21 10:22:00 crc kubenswrapper[4684]: I0121 10:22:00.052499 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/6ea3f691-f7ad-4d69-97ef-3427114b483b-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx\" (UID: \"6ea3f691-f7ad-4d69-97ef-3427114b483b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx" Jan 21 10:22:00 crc kubenswrapper[4684]: I0121 10:22:00.074504 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/6ea3f691-f7ad-4d69-97ef-3427114b483b-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx\" (UID: \"6ea3f691-f7ad-4d69-97ef-3427114b483b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx" Jan 21 10:22:00 crc kubenswrapper[4684]: I0121 10:22:00.075970 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd95r\" (UniqueName: \"kubernetes.io/projected/6ea3f691-f7ad-4d69-97ef-3427114b483b-kube-api-access-kd95r\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx\" (UID: \"6ea3f691-f7ad-4d69-97ef-3427114b483b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx" Jan 21 10:22:00 crc kubenswrapper[4684]: I0121 10:22:00.297647 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4" event={"ID":"a37b014b-6f8d-4179-be35-4896b00e0aec","Type":"ContainerStarted","Data":"44657da6002b2ec82cd3f9f3aedc07e0716a7738c05ca9b7b242c71c81f886d8"} Jan 21 10:22:00 crc kubenswrapper[4684]: I0121 10:22:00.557768 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6ea3f691-f7ad-4d69-97ef-3427114b483b-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx\" (UID: \"6ea3f691-f7ad-4d69-97ef-3427114b483b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx" Jan 21 10:22:00 crc kubenswrapper[4684]: E0121 10:22:00.557929 4684 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Jan 21 10:22:00 crc kubenswrapper[4684]: E0121 10:22:00.558010 4684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ea3f691-f7ad-4d69-97ef-3427114b483b-default-cloud1-sens-meter-proxy-tls podName:6ea3f691-f7ad-4d69-97ef-3427114b483b nodeName:}" failed. No retries permitted until 2026-01-21 10:22:01.557992211 +0000 UTC m=+959.316075178 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/6ea3f691-f7ad-4d69-97ef-3427114b483b-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx" (UID: "6ea3f691-f7ad-4d69-97ef-3427114b483b") : secret "default-cloud1-sens-meter-proxy-tls" not found Jan 21 10:22:01 crc kubenswrapper[4684]: I0121 10:22:01.570946 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6ea3f691-f7ad-4d69-97ef-3427114b483b-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx\" (UID: \"6ea3f691-f7ad-4d69-97ef-3427114b483b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx" Jan 21 10:22:01 crc kubenswrapper[4684]: I0121 10:22:01.580682 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6ea3f691-f7ad-4d69-97ef-3427114b483b-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx\" (UID: \"6ea3f691-f7ad-4d69-97ef-3427114b483b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx" Jan 21 10:22:01 crc kubenswrapper[4684]: I0121 10:22:01.594474 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx" Jan 21 10:22:01 crc kubenswrapper[4684]: I0121 10:22:01.905745 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/prometheus-default-0" Jan 21 10:22:07 crc kubenswrapper[4684]: I0121 10:22:07.302009 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:22:07 crc kubenswrapper[4684]: I0121 10:22:07.302618 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:22:07 crc kubenswrapper[4684]: I0121 10:22:07.641299 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2"] Jan 21 10:22:07 crc kubenswrapper[4684]: I0121 10:22:07.645178 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2" Jan 21 10:22:07 crc kubenswrapper[4684]: I0121 10:22:07.648900 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2"] Jan 21 10:22:07 crc kubenswrapper[4684]: I0121 10:22:07.649263 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-cert" Jan 21 10:22:07 crc kubenswrapper[4684]: I0121 10:22:07.649469 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-event-sg-core-configmap" Jan 21 10:22:07 crc kubenswrapper[4684]: I0121 10:22:07.768689 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtjkx\" (UniqueName: \"kubernetes.io/projected/a606d589-c44b-4751-b3ff-54b86dd83209-kube-api-access-qtjkx\") pod \"default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2\" (UID: \"a606d589-c44b-4751-b3ff-54b86dd83209\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2" Jan 21 10:22:07 crc kubenswrapper[4684]: I0121 10:22:07.768937 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/a606d589-c44b-4751-b3ff-54b86dd83209-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2\" (UID: \"a606d589-c44b-4751-b3ff-54b86dd83209\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2" Jan 21 10:22:07 crc kubenswrapper[4684]: I0121 10:22:07.769035 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/a606d589-c44b-4751-b3ff-54b86dd83209-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2\" (UID: \"a606d589-c44b-4751-b3ff-54b86dd83209\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2" Jan 21 10:22:07 crc kubenswrapper[4684]: I0121 10:22:07.769071 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/a606d589-c44b-4751-b3ff-54b86dd83209-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2\" (UID: \"a606d589-c44b-4751-b3ff-54b86dd83209\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2" Jan 21 10:22:07 crc kubenswrapper[4684]: I0121 10:22:07.870384 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtjkx\" (UniqueName: \"kubernetes.io/projected/a606d589-c44b-4751-b3ff-54b86dd83209-kube-api-access-qtjkx\") pod \"default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2\" (UID: \"a606d589-c44b-4751-b3ff-54b86dd83209\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2" Jan 21 10:22:07 crc kubenswrapper[4684]: I0121 10:22:07.870483 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/a606d589-c44b-4751-b3ff-54b86dd83209-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2\" (UID: \"a606d589-c44b-4751-b3ff-54b86dd83209\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2" Jan 21 10:22:07 crc kubenswrapper[4684]: I0121 10:22:07.870513 4684 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/a606d589-c44b-4751-b3ff-54b86dd83209-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2\" (UID: \"a606d589-c44b-4751-b3ff-54b86dd83209\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2" Jan 21 10:22:07 crc kubenswrapper[4684]: I0121 10:22:07.870538 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/a606d589-c44b-4751-b3ff-54b86dd83209-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2\" (UID: \"a606d589-c44b-4751-b3ff-54b86dd83209\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2" Jan 21 10:22:07 crc kubenswrapper[4684]: I0121 10:22:07.871291 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/a606d589-c44b-4751-b3ff-54b86dd83209-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2\" (UID: \"a606d589-c44b-4751-b3ff-54b86dd83209\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2" Jan 21 10:22:07 crc kubenswrapper[4684]: I0121 10:22:07.871781 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/a606d589-c44b-4751-b3ff-54b86dd83209-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2\" (UID: \"a606d589-c44b-4751-b3ff-54b86dd83209\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2" Jan 21 10:22:07 crc kubenswrapper[4684]: I0121 10:22:07.878024 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/a606d589-c44b-4751-b3ff-54b86dd83209-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2\" (UID: \"a606d589-c44b-4751-b3ff-54b86dd83209\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2" Jan 21 10:22:07 crc kubenswrapper[4684]: I0121 10:22:07.888307 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtjkx\" (UniqueName: \"kubernetes.io/projected/a606d589-c44b-4751-b3ff-54b86dd83209-kube-api-access-qtjkx\") pod \"default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2\" (UID: \"a606d589-c44b-4751-b3ff-54b86dd83209\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2" Jan 21 10:22:07 crc kubenswrapper[4684]: I0121 10:22:07.971023 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2" Jan 21 10:22:09 crc kubenswrapper[4684]: I0121 10:22:09.333903 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5"] Jan 21 10:22:09 crc kubenswrapper[4684]: I0121 10:22:09.335056 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5" Jan 21 10:22:09 crc kubenswrapper[4684]: I0121 10:22:09.336961 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-event-sg-core-configmap" Jan 21 10:22:09 crc kubenswrapper[4684]: I0121 10:22:09.355259 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5"] Jan 21 10:22:09 crc kubenswrapper[4684]: I0121 10:22:09.492146 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/7af3e6f3-abfd-489c-9d2b-2c1469076565-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5\" (UID: \"7af3e6f3-abfd-489c-9d2b-2c1469076565\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5" Jan 21 10:22:09 crc kubenswrapper[4684]: I0121 10:22:09.492240 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/7af3e6f3-abfd-489c-9d2b-2c1469076565-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5\" (UID: \"7af3e6f3-abfd-489c-9d2b-2c1469076565\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5" Jan 21 10:22:09 crc kubenswrapper[4684]: I0121 10:22:09.492706 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/7af3e6f3-abfd-489c-9d2b-2c1469076565-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5\" (UID: \"7af3e6f3-abfd-489c-9d2b-2c1469076565\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5" Jan 21 10:22:09 crc kubenswrapper[4684]: I0121 10:22:09.492777 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4swjp\" (UniqueName: \"kubernetes.io/projected/7af3e6f3-abfd-489c-9d2b-2c1469076565-kube-api-access-4swjp\") pod \"default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5\" (UID: \"7af3e6f3-abfd-489c-9d2b-2c1469076565\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5" Jan 21 10:22:09 crc kubenswrapper[4684]: I0121 10:22:09.593991 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/7af3e6f3-abfd-489c-9d2b-2c1469076565-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5\" (UID: \"7af3e6f3-abfd-489c-9d2b-2c1469076565\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5" Jan 21 10:22:09 crc kubenswrapper[4684]: I0121 10:22:09.594118 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/7af3e6f3-abfd-489c-9d2b-2c1469076565-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5\" (UID: \"7af3e6f3-abfd-489c-9d2b-2c1469076565\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5" Jan 21 10:22:09 crc kubenswrapper[4684]: I0121 10:22:09.594151 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4swjp\" (UniqueName: \"kubernetes.io/projected/7af3e6f3-abfd-489c-9d2b-2c1469076565-kube-api-access-4swjp\") pod 
\"default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5\" (UID: \"7af3e6f3-abfd-489c-9d2b-2c1469076565\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5" Jan 21 10:22:09 crc kubenswrapper[4684]: I0121 10:22:09.594209 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/7af3e6f3-abfd-489c-9d2b-2c1469076565-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5\" (UID: \"7af3e6f3-abfd-489c-9d2b-2c1469076565\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5" Jan 21 10:22:09 crc kubenswrapper[4684]: I0121 10:22:09.594610 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/7af3e6f3-abfd-489c-9d2b-2c1469076565-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5\" (UID: \"7af3e6f3-abfd-489c-9d2b-2c1469076565\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5" Jan 21 10:22:09 crc kubenswrapper[4684]: I0121 10:22:09.594967 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/7af3e6f3-abfd-489c-9d2b-2c1469076565-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5\" (UID: \"7af3e6f3-abfd-489c-9d2b-2c1469076565\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5" Jan 21 10:22:09 crc kubenswrapper[4684]: I0121 10:22:09.599857 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/7af3e6f3-abfd-489c-9d2b-2c1469076565-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5\" (UID: \"7af3e6f3-abfd-489c-9d2b-2c1469076565\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5" Jan 21 10:22:09 crc kubenswrapper[4684]: I0121 10:22:09.609715 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4swjp\" (UniqueName: \"kubernetes.io/projected/7af3e6f3-abfd-489c-9d2b-2c1469076565-kube-api-access-4swjp\") pod \"default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5\" (UID: \"7af3e6f3-abfd-489c-9d2b-2c1469076565\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5" Jan 21 10:22:09 crc kubenswrapper[4684]: I0121 10:22:09.666856 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5" Jan 21 10:22:11 crc kubenswrapper[4684]: I0121 10:22:11.053474 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx"] Jan 21 10:22:11 crc kubenswrapper[4684]: W0121 10:22:11.078177 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ea3f691_f7ad_4d69_97ef_3427114b483b.slice/crio-c5b1b840632bb2404233e7943dc80d2c31676713774ceb83291a02be72518c42 WatchSource:0}: Error finding container c5b1b840632bb2404233e7943dc80d2c31676713774ceb83291a02be72518c42: Status 404 returned error can't find the container with id c5b1b840632bb2404233e7943dc80d2c31676713774ceb83291a02be72518c42 Jan 21 10:22:11 crc kubenswrapper[4684]: I0121 10:22:11.108479 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2"] Jan 21 10:22:11 crc kubenswrapper[4684]: W0121 10:22:11.119100 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda606d589_c44b_4751_b3ff_54b86dd83209.slice/crio-7f3798fdd561796589200baa4ed144d5ebb4dfe4fa953bca396797b9eeed1932 WatchSource:0}: Error finding container 7f3798fdd561796589200baa4ed144d5ebb4dfe4fa953bca396797b9eeed1932: Status 404 returned error can't find the container with id 7f3798fdd561796589200baa4ed144d5ebb4dfe4fa953bca396797b9eeed1932 Jan 21 10:22:11 crc kubenswrapper[4684]: I0121 10:22:11.342342 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5"] Jan 21 10:22:11 crc kubenswrapper[4684]: W0121 10:22:11.351202 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7af3e6f3_abfd_489c_9d2b_2c1469076565.slice/crio-4ae6dbf631ebbbd797038d1ffe20370112887ee0dd1011be247857d949f4a955 WatchSource:0}: Error finding container 4ae6dbf631ebbbd797038d1ffe20370112887ee0dd1011be247857d949f4a955: Status 404 returned error can't find the container with id 4ae6dbf631ebbbd797038d1ffe20370112887ee0dd1011be247857d949f4a955 Jan 21 10:22:11 crc kubenswrapper[4684]: I0121 10:22:11.397627 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx" event={"ID":"6ea3f691-f7ad-4d69-97ef-3427114b483b","Type":"ContainerStarted","Data":"c5b1b840632bb2404233e7943dc80d2c31676713774ceb83291a02be72518c42"} Jan 21 10:22:11 crc kubenswrapper[4684]: I0121 10:22:11.403096 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4" event={"ID":"a37b014b-6f8d-4179-be35-4896b00e0aec","Type":"ContainerStarted","Data":"2f73b7ee5af33ba26b7cb89cfe16400e89ad62c09e6adfe48a3bdf61771901a4"} Jan 21 10:22:11 crc kubenswrapper[4684]: I0121 10:22:11.404160 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2" event={"ID":"a606d589-c44b-4751-b3ff-54b86dd83209","Type":"ContainerStarted","Data":"7f3798fdd561796589200baa4ed144d5ebb4dfe4fa953bca396797b9eeed1932"} Jan 21 10:22:11 crc kubenswrapper[4684]: I0121 10:22:11.405978 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-6859n" event={"ID":"e2618a9c-cdc6-48e5-8b8e-eeb451329b9c","Type":"ContainerStarted","Data":"bbe2e111681af328bf7e340edde7ee25c85651b77375fc735aedef3df8820c8b"} Jan 21 10:22:11 crc kubenswrapper[4684]: I0121 10:22:11.407807 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5" event={"ID":"7af3e6f3-abfd-489c-9d2b-2c1469076565","Type":"ContainerStarted","Data":"4ae6dbf631ebbbd797038d1ffe20370112887ee0dd1011be247857d949f4a955"} Jan 21 10:22:11 crc kubenswrapper[4684]: I0121 10:22:11.411409 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"6e76ba58-56c8-4465-bcee-35a5b361608b","Type":"ContainerStarted","Data":"fcd72b1691f1f3d165de379378ff89eb7bd6bb2633c49099ef0c2ac5c427ab03"} Jan 21 10:22:11 crc kubenswrapper[4684]: I0121 10:22:11.905488 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Jan 21 10:22:11 crc kubenswrapper[4684]: I0121 10:22:11.954429 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Jan 21 10:22:12 crc kubenswrapper[4684]: I0121 10:22:12.420815 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2" event={"ID":"a606d589-c44b-4751-b3ff-54b86dd83209","Type":"ContainerStarted","Data":"2255e03e59b145a3619a1e0d252c95d91abccb1b2c7650c2d5c8cc409b30458e"} Jan 21 10:22:12 crc kubenswrapper[4684]: I0121 10:22:12.423419 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5" event={"ID":"7af3e6f3-abfd-489c-9d2b-2c1469076565","Type":"ContainerStarted","Data":"14cb292f8d2c04639ef0bb44c9df4cba45df2c145fac2298875eda2c5cfe120d"} Jan 21 10:22:12 crc kubenswrapper[4684]: I0121 10:22:12.425796 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx" event={"ID":"6ea3f691-f7ad-4d69-97ef-3427114b483b","Type":"ContainerStarted","Data":"da2ff09df35f45554ff2b764b46eb1f9294461df89724b8fdbccfb3f9fb52f58"} Jan 21 10:22:12 crc kubenswrapper[4684]: I0121 10:22:12.618697 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0" Jan 21 10:22:13 crc kubenswrapper[4684]: I0121 10:22:13.438449 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"6e76ba58-56c8-4465-bcee-35a5b361608b","Type":"ContainerStarted","Data":"00b2a121e1f5afe1895c85139f254bd36f6507fd1bcbfd8819a994578fafc8a7"} Jan 21 10:22:13 crc kubenswrapper[4684]: I0121 10:22:13.440276 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx" event={"ID":"6ea3f691-f7ad-4d69-97ef-3427114b483b","Type":"ContainerStarted","Data":"606119a6e8a19cd993c1cf879e9c6f7794e777dcf78ac11e514b1d55a29809e3"} Jan 21 10:22:17 crc kubenswrapper[4684]: I0121 10:22:17.469338 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"6e76ba58-56c8-4465-bcee-35a5b361608b","Type":"ContainerStarted","Data":"d9bf28fd6d19b80f07ddc9bc6d703debaec736031c479ab582262d5b0d1c2269"} Jan 21 10:22:17 crc kubenswrapper[4684]: I0121 10:22:17.472203 4684 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx" event={"ID":"6ea3f691-f7ad-4d69-97ef-3427114b483b","Type":"ContainerStarted","Data":"c5b53646cbd56418567765f23d2a2c995b90c67a0503982d4b02117fc943d3db"} Jan 21 10:22:17 crc kubenswrapper[4684]: I0121 10:22:17.474961 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4" event={"ID":"a37b014b-6f8d-4179-be35-4896b00e0aec","Type":"ContainerStarted","Data":"d5d719962ebdf87919119e78b5dfafd38b42a77c07558328d984b2a8c31f9666"} Jan 21 10:22:17 crc kubenswrapper[4684]: I0121 10:22:17.477256 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2" event={"ID":"a606d589-c44b-4751-b3ff-54b86dd83209","Type":"ContainerStarted","Data":"32959a97c6230afc540ff89216bfc1437e6d019f570faad6cf28335d5de7d59b"} Jan 21 10:22:17 crc kubenswrapper[4684]: I0121 10:22:17.480111 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-6859n" event={"ID":"e2618a9c-cdc6-48e5-8b8e-eeb451329b9c","Type":"ContainerStarted","Data":"164ebec5e97b5e7725a17fad97a4884a035c074c9562f15d3b54c4cdb8b6d3ef"} Jan 21 10:22:17 crc kubenswrapper[4684]: I0121 10:22:17.482182 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5" event={"ID":"7af3e6f3-abfd-489c-9d2b-2c1469076565","Type":"ContainerStarted","Data":"23eb113dc97e743b730555f5b958804a0d5eb912f2c0db06555aac8f0c6f9393"} Jan 21 10:22:17 crc kubenswrapper[4684]: I0121 10:22:17.498648 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=25.260708414 podStartE2EDuration="39.498630844s" podCreationTimestamp="2026-01-21 10:21:38 +0000 UTC" firstStartedPulling="2026-01-21 10:21:59.285738128 +0000 UTC m=+957.043821095" lastFinishedPulling="2026-01-21 10:22:13.523660558 +0000 UTC m=+971.281743525" observedRunningTime="2026-01-21 10:22:17.497959793 +0000 UTC m=+975.256042760" watchObservedRunningTime="2026-01-21 10:22:17.498630844 +0000 UTC m=+975.256713811" Jan 21 10:22:17 crc kubenswrapper[4684]: I0121 10:22:17.526973 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5" podStartSLOduration=2.9403573400000003 podStartE2EDuration="8.526952586s" podCreationTimestamp="2026-01-21 10:22:09 +0000 UTC" firstStartedPulling="2026-01-21 10:22:11.354541328 +0000 UTC m=+969.112624295" lastFinishedPulling="2026-01-21 10:22:16.941136574 +0000 UTC m=+974.699219541" observedRunningTime="2026-01-21 10:22:17.523765738 +0000 UTC m=+975.281848705" watchObservedRunningTime="2026-01-21 10:22:17.526952586 +0000 UTC m=+975.285035553" Jan 21 10:22:17 crc kubenswrapper[4684]: I0121 10:22:17.548309 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx" podStartSLOduration=12.695919797 podStartE2EDuration="18.548285023s" podCreationTimestamp="2026-01-21 10:21:59 +0000 UTC" firstStartedPulling="2026-01-21 10:22:11.106596946 +0000 UTC m=+968.864679913" lastFinishedPulling="2026-01-21 10:22:16.958962162 +0000 UTC m=+974.717045139" observedRunningTime="2026-01-21 10:22:17.548225781 +0000 UTC m=+975.306308748" 
watchObservedRunningTime="2026-01-21 10:22:17.548285023 +0000 UTC m=+975.306368000" Jan 21 10:22:17 crc kubenswrapper[4684]: I0121 10:22:17.569889 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-6859n" podStartSLOduration=6.177242819 podStartE2EDuration="24.569823275s" podCreationTimestamp="2026-01-21 10:21:53 +0000 UTC" firstStartedPulling="2026-01-21 10:21:58.528535771 +0000 UTC m=+956.286618738" lastFinishedPulling="2026-01-21 10:22:16.921116227 +0000 UTC m=+974.679199194" observedRunningTime="2026-01-21 10:22:17.567931787 +0000 UTC m=+975.326014784" watchObservedRunningTime="2026-01-21 10:22:17.569823275 +0000 UTC m=+975.327906242" Jan 21 10:22:17 crc kubenswrapper[4684]: I0121 10:22:17.604165 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4" podStartSLOduration=3.557694669 podStartE2EDuration="21.604141662s" podCreationTimestamp="2026-01-21 10:21:56 +0000 UTC" firstStartedPulling="2026-01-21 10:21:58.943897725 +0000 UTC m=+956.701980692" lastFinishedPulling="2026-01-21 10:22:16.990344718 +0000 UTC m=+974.748427685" observedRunningTime="2026-01-21 10:22:17.600219091 +0000 UTC m=+975.358302058" watchObservedRunningTime="2026-01-21 10:22:17.604141662 +0000 UTC m=+975.362224629" Jan 21 10:22:21 crc kubenswrapper[4684]: I0121 10:22:21.793174 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2" podStartSLOduration=9.041528472 podStartE2EDuration="14.793147797s" podCreationTimestamp="2026-01-21 10:22:07 +0000 UTC" firstStartedPulling="2026-01-21 10:22:11.141529151 +0000 UTC m=+968.899612118" lastFinishedPulling="2026-01-21 10:22:16.893148466 +0000 UTC m=+974.651231443" observedRunningTime="2026-01-21 10:22:17.618277067 +0000 UTC m=+975.376360044" watchObservedRunningTime="2026-01-21 10:22:21.793147797 +0000 UTC m=+979.551230784" Jan 21 10:22:21 crc kubenswrapper[4684]: I0121 10:22:21.796116 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-cssbq"] Jan 21 10:22:21 crc kubenswrapper[4684]: I0121 10:22:21.796816 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/default-interconnect-68864d46cb-cssbq" podUID="c5f7a6cf-b540-4f93-b633-481d62ebc626" containerName="default-interconnect" containerID="cri-o://3f90d59b787e6bb199531391d36825e49a59aea529d49f50c3625751d5bcdd48" gracePeriod=30 Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.190534 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-cssbq" Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.311587 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/c5f7a6cf-b540-4f93-b633-481d62ebc626-sasl-users\") pod \"c5f7a6cf-b540-4f93-b633-481d62ebc626\" (UID: \"c5f7a6cf-b540-4f93-b633-481d62ebc626\") " Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.311686 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c5f7a6cf-b540-4f93-b633-481d62ebc626-default-interconnect-inter-router-credentials\") pod \"c5f7a6cf-b540-4f93-b633-481d62ebc626\" (UID: \"c5f7a6cf-b540-4f93-b633-481d62ebc626\") " Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.311717 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65mcz\" (UniqueName: \"kubernetes.io/projected/c5f7a6cf-b540-4f93-b633-481d62ebc626-kube-api-access-65mcz\") pod \"c5f7a6cf-b540-4f93-b633-481d62ebc626\" (UID: \"c5f7a6cf-b540-4f93-b633-481d62ebc626\") " Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.311769 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c5f7a6cf-b540-4f93-b633-481d62ebc626-sasl-config\") pod \"c5f7a6cf-b540-4f93-b633-481d62ebc626\" (UID: \"c5f7a6cf-b540-4f93-b633-481d62ebc626\") " Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.311808 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c5f7a6cf-b540-4f93-b633-481d62ebc626-default-interconnect-inter-router-ca\") pod \"c5f7a6cf-b540-4f93-b633-481d62ebc626\" (UID: \"c5f7a6cf-b540-4f93-b633-481d62ebc626\") " Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.311924 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c5f7a6cf-b540-4f93-b633-481d62ebc626-default-interconnect-openstack-ca\") pod \"c5f7a6cf-b540-4f93-b633-481d62ebc626\" (UID: \"c5f7a6cf-b540-4f93-b633-481d62ebc626\") " Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.312015 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c5f7a6cf-b540-4f93-b633-481d62ebc626-default-interconnect-openstack-credentials\") pod \"c5f7a6cf-b540-4f93-b633-481d62ebc626\" (UID: \"c5f7a6cf-b540-4f93-b633-481d62ebc626\") " Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.312317 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f7a6cf-b540-4f93-b633-481d62ebc626-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "c5f7a6cf-b540-4f93-b633-481d62ebc626" (UID: "c5f7a6cf-b540-4f93-b633-481d62ebc626"). InnerVolumeSpecName "sasl-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.313210 4684 reconciler_common.go:293] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c5f7a6cf-b540-4f93-b633-481d62ebc626-sasl-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.317631 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f7a6cf-b540-4f93-b633-481d62ebc626-kube-api-access-65mcz" (OuterVolumeSpecName: "kube-api-access-65mcz") pod "c5f7a6cf-b540-4f93-b633-481d62ebc626" (UID: "c5f7a6cf-b540-4f93-b633-481d62ebc626"). InnerVolumeSpecName "kube-api-access-65mcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.318827 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f7a6cf-b540-4f93-b633-481d62ebc626-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "c5f7a6cf-b540-4f93-b633-481d62ebc626" (UID: "c5f7a6cf-b540-4f93-b633-481d62ebc626"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.318842 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f7a6cf-b540-4f93-b633-481d62ebc626-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "c5f7a6cf-b540-4f93-b633-481d62ebc626" (UID: "c5f7a6cf-b540-4f93-b633-481d62ebc626"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.319564 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f7a6cf-b540-4f93-b633-481d62ebc626-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "c5f7a6cf-b540-4f93-b633-481d62ebc626" (UID: "c5f7a6cf-b540-4f93-b633-481d62ebc626"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.326530 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f7a6cf-b540-4f93-b633-481d62ebc626-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "c5f7a6cf-b540-4f93-b633-481d62ebc626" (UID: "c5f7a6cf-b540-4f93-b633-481d62ebc626"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.327015 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f7a6cf-b540-4f93-b633-481d62ebc626-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "c5f7a6cf-b540-4f93-b633-481d62ebc626" (UID: "c5f7a6cf-b540-4f93-b633-481d62ebc626"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.415093 4684 reconciler_common.go:293] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/c5f7a6cf-b540-4f93-b633-481d62ebc626-sasl-users\") on node \"crc\" DevicePath \"\"" Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.415139 4684 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c5f7a6cf-b540-4f93-b633-481d62ebc626-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.415159 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65mcz\" (UniqueName: \"kubernetes.io/projected/c5f7a6cf-b540-4f93-b633-481d62ebc626-kube-api-access-65mcz\") on node \"crc\" DevicePath \"\"" Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.415173 4684 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c5f7a6cf-b540-4f93-b633-481d62ebc626-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.415189 4684 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c5f7a6cf-b540-4f93-b633-481d62ebc626-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.415201 4684 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c5f7a6cf-b540-4f93-b633-481d62ebc626-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.523934 4684 generic.go:334] "Generic (PLEG): container finished" podID="e2618a9c-cdc6-48e5-8b8e-eeb451329b9c" containerID="bbe2e111681af328bf7e340edde7ee25c85651b77375fc735aedef3df8820c8b" exitCode=0 Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.530918 4684 generic.go:334] "Generic (PLEG): container finished" podID="7af3e6f3-abfd-489c-9d2b-2c1469076565" containerID="14cb292f8d2c04639ef0bb44c9df4cba45df2c145fac2298875eda2c5cfe120d" exitCode=0 Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.542528 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-6859n" event={"ID":"e2618a9c-cdc6-48e5-8b8e-eeb451329b9c","Type":"ContainerDied","Data":"bbe2e111681af328bf7e340edde7ee25c85651b77375fc735aedef3df8820c8b"} Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.542615 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5" event={"ID":"7af3e6f3-abfd-489c-9d2b-2c1469076565","Type":"ContainerDied","Data":"14cb292f8d2c04639ef0bb44c9df4cba45df2c145fac2298875eda2c5cfe120d"} Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.544062 4684 scope.go:117] "RemoveContainer" containerID="14cb292f8d2c04639ef0bb44c9df4cba45df2c145fac2298875eda2c5cfe120d" Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.544218 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-cssbq" Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.544099 4684 generic.go:334] "Generic (PLEG): container finished" podID="c5f7a6cf-b540-4f93-b633-481d62ebc626" containerID="3f90d59b787e6bb199531391d36825e49a59aea529d49f50c3625751d5bcdd48" exitCode=0 Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.545007 4684 scope.go:117] "RemoveContainer" containerID="bbe2e111681af328bf7e340edde7ee25c85651b77375fc735aedef3df8820c8b" Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.544125 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-cssbq" event={"ID":"c5f7a6cf-b540-4f93-b633-481d62ebc626","Type":"ContainerDied","Data":"3f90d59b787e6bb199531391d36825e49a59aea529d49f50c3625751d5bcdd48"} Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.545309 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-cssbq" event={"ID":"c5f7a6cf-b540-4f93-b633-481d62ebc626","Type":"ContainerDied","Data":"b2fd77c34b05e9a60dc775e8fdbc875aa047cf9e7ab5a40fd08f9815a9cdab3f"} Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.545438 4684 scope.go:117] "RemoveContainer" containerID="3f90d59b787e6bb199531391d36825e49a59aea529d49f50c3625751d5bcdd48" Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.556898 4684 generic.go:334] "Generic (PLEG): container finished" podID="a37b014b-6f8d-4179-be35-4896b00e0aec" containerID="2f73b7ee5af33ba26b7cb89cfe16400e89ad62c09e6adfe48a3bdf61771901a4" exitCode=0 Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.556948 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4" event={"ID":"a37b014b-6f8d-4179-be35-4896b00e0aec","Type":"ContainerDied","Data":"2f73b7ee5af33ba26b7cb89cfe16400e89ad62c09e6adfe48a3bdf61771901a4"} Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.571675 4684 scope.go:117] "RemoveContainer" containerID="2f73b7ee5af33ba26b7cb89cfe16400e89ad62c09e6adfe48a3bdf61771901a4" Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.612958 4684 scope.go:117] "RemoveContainer" containerID="3f90d59b787e6bb199531391d36825e49a59aea529d49f50c3625751d5bcdd48" Jan 21 10:22:22 crc kubenswrapper[4684]: E0121 10:22:22.618101 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f90d59b787e6bb199531391d36825e49a59aea529d49f50c3625751d5bcdd48\": container with ID starting with 3f90d59b787e6bb199531391d36825e49a59aea529d49f50c3625751d5bcdd48 not found: ID does not exist" containerID="3f90d59b787e6bb199531391d36825e49a59aea529d49f50c3625751d5bcdd48" Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.618157 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f90d59b787e6bb199531391d36825e49a59aea529d49f50c3625751d5bcdd48"} err="failed to get container status \"3f90d59b787e6bb199531391d36825e49a59aea529d49f50c3625751d5bcdd48\": rpc error: code = NotFound desc = could not find container \"3f90d59b787e6bb199531391d36825e49a59aea529d49f50c3625751d5bcdd48\": container with ID starting with 3f90d59b787e6bb199531391d36825e49a59aea529d49f50c3625751d5bcdd48 not found: ID does not exist" Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.659413 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["service-telemetry/default-interconnect-68864d46cb-cssbq"] Jan 21 10:22:22 crc kubenswrapper[4684]: I0121 10:22:22.670657 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-cssbq"] Jan 21 10:22:23 crc kubenswrapper[4684]: I0121 10:22:23.565607 4684 generic.go:334] "Generic (PLEG): container finished" podID="6ea3f691-f7ad-4d69-97ef-3427114b483b" containerID="606119a6e8a19cd993c1cf879e9c6f7794e777dcf78ac11e514b1d55a29809e3" exitCode=0 Jan 21 10:22:23 crc kubenswrapper[4684]: I0121 10:22:23.565679 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx" event={"ID":"6ea3f691-f7ad-4d69-97ef-3427114b483b","Type":"ContainerDied","Data":"606119a6e8a19cd993c1cf879e9c6f7794e777dcf78ac11e514b1d55a29809e3"} Jan 21 10:22:23 crc kubenswrapper[4684]: I0121 10:22:23.566238 4684 scope.go:117] "RemoveContainer" containerID="606119a6e8a19cd993c1cf879e9c6f7794e777dcf78ac11e514b1d55a29809e3" Jan 21 10:22:23 crc kubenswrapper[4684]: I0121 10:22:23.569216 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4" event={"ID":"a37b014b-6f8d-4179-be35-4896b00e0aec","Type":"ContainerStarted","Data":"30b4f3f714e1ba1d2ac089350394ad4ad0611204a642ebea3d70bc60cc180d0f"} Jan 21 10:22:23 crc kubenswrapper[4684]: I0121 10:22:23.573165 4684 generic.go:334] "Generic (PLEG): container finished" podID="a606d589-c44b-4751-b3ff-54b86dd83209" containerID="2255e03e59b145a3619a1e0d252c95d91abccb1b2c7650c2d5c8cc409b30458e" exitCode=0 Jan 21 10:22:23 crc kubenswrapper[4684]: I0121 10:22:23.573209 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2" event={"ID":"a606d589-c44b-4751-b3ff-54b86dd83209","Type":"ContainerDied","Data":"2255e03e59b145a3619a1e0d252c95d91abccb1b2c7650c2d5c8cc409b30458e"} Jan 21 10:22:23 crc kubenswrapper[4684]: I0121 10:22:23.573845 4684 scope.go:117] "RemoveContainer" containerID="2255e03e59b145a3619a1e0d252c95d91abccb1b2c7650c2d5c8cc409b30458e" Jan 21 10:22:23 crc kubenswrapper[4684]: I0121 10:22:23.793899 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-h996c"] Jan 21 10:22:23 crc kubenswrapper[4684]: E0121 10:22:23.794418 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5f7a6cf-b540-4f93-b633-481d62ebc626" containerName="default-interconnect" Jan 21 10:22:23 crc kubenswrapper[4684]: I0121 10:22:23.794431 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5f7a6cf-b540-4f93-b633-481d62ebc626" containerName="default-interconnect" Jan 21 10:22:23 crc kubenswrapper[4684]: I0121 10:22:23.794540 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5f7a6cf-b540-4f93-b633-481d62ebc626" containerName="default-interconnect" Jan 21 10:22:23 crc kubenswrapper[4684]: I0121 10:22:23.794934 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-h996c" Jan 21 10:22:23 crc kubenswrapper[4684]: I0121 10:22:23.797352 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-users" Jan 21 10:22:23 crc kubenswrapper[4684]: I0121 10:22:23.797457 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-dockercfg-6959v" Jan 21 10:22:23 crc kubenswrapper[4684]: I0121 10:22:23.798280 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config" Jan 21 10:22:23 crc kubenswrapper[4684]: I0121 10:22:23.798553 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca" Jan 21 10:22:23 crc kubenswrapper[4684]: I0121 10:22:23.798610 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials" Jan 21 10:22:23 crc kubenswrapper[4684]: I0121 10:22:23.798808 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials" Jan 21 10:22:23 crc kubenswrapper[4684]: I0121 10:22:23.798859 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca" Jan 21 10:22:23 crc kubenswrapper[4684]: I0121 10:22:23.808951 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-h996c"] Jan 21 10:22:23 crc kubenswrapper[4684]: I0121 10:22:23.944253 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/199561ac-4446-4757-99b3-d9be6b135398-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-h996c\" (UID: \"199561ac-4446-4757-99b3-d9be6b135398\") " pod="service-telemetry/default-interconnect-68864d46cb-h996c" Jan 21 10:22:23 crc kubenswrapper[4684]: I0121 10:22:23.944320 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/199561ac-4446-4757-99b3-d9be6b135398-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-h996c\" (UID: \"199561ac-4446-4757-99b3-d9be6b135398\") " pod="service-telemetry/default-interconnect-68864d46cb-h996c" Jan 21 10:22:23 crc kubenswrapper[4684]: I0121 10:22:23.944408 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/199561ac-4446-4757-99b3-d9be6b135398-sasl-config\") pod \"default-interconnect-68864d46cb-h996c\" (UID: \"199561ac-4446-4757-99b3-d9be6b135398\") " pod="service-telemetry/default-interconnect-68864d46cb-h996c" Jan 21 10:22:23 crc kubenswrapper[4684]: I0121 10:22:23.944454 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6sxs\" (UniqueName: \"kubernetes.io/projected/199561ac-4446-4757-99b3-d9be6b135398-kube-api-access-r6sxs\") pod \"default-interconnect-68864d46cb-h996c\" (UID: \"199561ac-4446-4757-99b3-d9be6b135398\") " pod="service-telemetry/default-interconnect-68864d46cb-h996c" Jan 21 10:22:23 crc kubenswrapper[4684]: I0121 10:22:23.944503 4684 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/199561ac-4446-4757-99b3-d9be6b135398-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-h996c\" (UID: \"199561ac-4446-4757-99b3-d9be6b135398\") " pod="service-telemetry/default-interconnect-68864d46cb-h996c" Jan 21 10:22:23 crc kubenswrapper[4684]: I0121 10:22:23.944543 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/199561ac-4446-4757-99b3-d9be6b135398-sasl-users\") pod \"default-interconnect-68864d46cb-h996c\" (UID: \"199561ac-4446-4757-99b3-d9be6b135398\") " pod="service-telemetry/default-interconnect-68864d46cb-h996c" Jan 21 10:22:23 crc kubenswrapper[4684]: I0121 10:22:23.944570 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/199561ac-4446-4757-99b3-d9be6b135398-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-h996c\" (UID: \"199561ac-4446-4757-99b3-d9be6b135398\") " pod="service-telemetry/default-interconnect-68864d46cb-h996c" Jan 21 10:22:24 crc kubenswrapper[4684]: I0121 10:22:24.045527 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/199561ac-4446-4757-99b3-d9be6b135398-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-h996c\" (UID: \"199561ac-4446-4757-99b3-d9be6b135398\") " pod="service-telemetry/default-interconnect-68864d46cb-h996c" Jan 21 10:22:24 crc kubenswrapper[4684]: I0121 10:22:24.045606 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/199561ac-4446-4757-99b3-d9be6b135398-sasl-config\") pod \"default-interconnect-68864d46cb-h996c\" (UID: \"199561ac-4446-4757-99b3-d9be6b135398\") " pod="service-telemetry/default-interconnect-68864d46cb-h996c" Jan 21 10:22:24 crc kubenswrapper[4684]: I0121 10:22:24.045635 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6sxs\" (UniqueName: \"kubernetes.io/projected/199561ac-4446-4757-99b3-d9be6b135398-kube-api-access-r6sxs\") pod \"default-interconnect-68864d46cb-h996c\" (UID: \"199561ac-4446-4757-99b3-d9be6b135398\") " pod="service-telemetry/default-interconnect-68864d46cb-h996c" Jan 21 10:22:24 crc kubenswrapper[4684]: I0121 10:22:24.045676 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/199561ac-4446-4757-99b3-d9be6b135398-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-h996c\" (UID: \"199561ac-4446-4757-99b3-d9be6b135398\") " pod="service-telemetry/default-interconnect-68864d46cb-h996c" Jan 21 10:22:24 crc kubenswrapper[4684]: I0121 10:22:24.045706 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/199561ac-4446-4757-99b3-d9be6b135398-sasl-users\") pod \"default-interconnect-68864d46cb-h996c\" (UID: \"199561ac-4446-4757-99b3-d9be6b135398\") " pod="service-telemetry/default-interconnect-68864d46cb-h996c" Jan 21 10:22:24 crc kubenswrapper[4684]: I0121 10:22:24.045726 4684 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/199561ac-4446-4757-99b3-d9be6b135398-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-h996c\" (UID: \"199561ac-4446-4757-99b3-d9be6b135398\") " pod="service-telemetry/default-interconnect-68864d46cb-h996c" Jan 21 10:22:24 crc kubenswrapper[4684]: I0121 10:22:24.045755 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/199561ac-4446-4757-99b3-d9be6b135398-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-h996c\" (UID: \"199561ac-4446-4757-99b3-d9be6b135398\") " pod="service-telemetry/default-interconnect-68864d46cb-h996c" Jan 21 10:22:24 crc kubenswrapper[4684]: I0121 10:22:24.047574 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/199561ac-4446-4757-99b3-d9be6b135398-sasl-config\") pod \"default-interconnect-68864d46cb-h996c\" (UID: \"199561ac-4446-4757-99b3-d9be6b135398\") " pod="service-telemetry/default-interconnect-68864d46cb-h996c" Jan 21 10:22:24 crc kubenswrapper[4684]: I0121 10:22:24.053589 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/199561ac-4446-4757-99b3-d9be6b135398-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-h996c\" (UID: \"199561ac-4446-4757-99b3-d9be6b135398\") " pod="service-telemetry/default-interconnect-68864d46cb-h996c" Jan 21 10:22:24 crc kubenswrapper[4684]: I0121 10:22:24.055714 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/199561ac-4446-4757-99b3-d9be6b135398-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-h996c\" (UID: \"199561ac-4446-4757-99b3-d9be6b135398\") " pod="service-telemetry/default-interconnect-68864d46cb-h996c" Jan 21 10:22:24 crc kubenswrapper[4684]: I0121 10:22:24.055924 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/199561ac-4446-4757-99b3-d9be6b135398-sasl-users\") pod \"default-interconnect-68864d46cb-h996c\" (UID: \"199561ac-4446-4757-99b3-d9be6b135398\") " pod="service-telemetry/default-interconnect-68864d46cb-h996c" Jan 21 10:22:24 crc kubenswrapper[4684]: I0121 10:22:24.056203 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/199561ac-4446-4757-99b3-d9be6b135398-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-h996c\" (UID: \"199561ac-4446-4757-99b3-d9be6b135398\") " pod="service-telemetry/default-interconnect-68864d46cb-h996c" Jan 21 10:22:24 crc kubenswrapper[4684]: I0121 10:22:24.060079 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/199561ac-4446-4757-99b3-d9be6b135398-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-h996c\" (UID: \"199561ac-4446-4757-99b3-d9be6b135398\") " pod="service-telemetry/default-interconnect-68864d46cb-h996c" Jan 21 10:22:24 crc kubenswrapper[4684]: I0121 10:22:24.064391 4684 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-r6sxs\" (UniqueName: \"kubernetes.io/projected/199561ac-4446-4757-99b3-d9be6b135398-kube-api-access-r6sxs\") pod \"default-interconnect-68864d46cb-h996c\" (UID: \"199561ac-4446-4757-99b3-d9be6b135398\") " pod="service-telemetry/default-interconnect-68864d46cb-h996c" Jan 21 10:22:24 crc kubenswrapper[4684]: I0121 10:22:24.110839 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-h996c" Jan 21 10:22:24 crc kubenswrapper[4684]: I0121 10:22:24.523867 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f7a6cf-b540-4f93-b633-481d62ebc626" path="/var/lib/kubelet/pods/c5f7a6cf-b540-4f93-b633-481d62ebc626/volumes" Jan 21 10:22:24 crc kubenswrapper[4684]: I0121 10:22:24.585035 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2" event={"ID":"a606d589-c44b-4751-b3ff-54b86dd83209","Type":"ContainerStarted","Data":"f1b5c30f405196cc78ad741d42ece63187637db736fbdf451ca94bb07e1bac40"} Jan 21 10:22:24 crc kubenswrapper[4684]: I0121 10:22:24.587845 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-6859n" event={"ID":"e2618a9c-cdc6-48e5-8b8e-eeb451329b9c","Type":"ContainerStarted","Data":"54eecdbd2a3099e2292cb569ab8196c7d2e0ec1a4adb7b5309e7bb2cc5dc87c7"} Jan 21 10:22:24 crc kubenswrapper[4684]: I0121 10:22:24.590645 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5" event={"ID":"7af3e6f3-abfd-489c-9d2b-2c1469076565","Type":"ContainerStarted","Data":"5bab1e1afe0ecbccf9e26e814a67155fb17392e0bce31c4fd84a211893b84172"} Jan 21 10:22:24 crc kubenswrapper[4684]: I0121 10:22:24.593705 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx" event={"ID":"6ea3f691-f7ad-4d69-97ef-3427114b483b","Type":"ContainerStarted","Data":"31439a90d1827c0c1901c96580017b6de80850405ab1b5d7d86741a04d79bfbf"} Jan 21 10:22:24 crc kubenswrapper[4684]: I0121 10:22:24.600248 4684 generic.go:334] "Generic (PLEG): container finished" podID="a37b014b-6f8d-4179-be35-4896b00e0aec" containerID="30b4f3f714e1ba1d2ac089350394ad4ad0611204a642ebea3d70bc60cc180d0f" exitCode=0 Jan 21 10:22:24 crc kubenswrapper[4684]: I0121 10:22:24.600292 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4" event={"ID":"a37b014b-6f8d-4179-be35-4896b00e0aec","Type":"ContainerDied","Data":"30b4f3f714e1ba1d2ac089350394ad4ad0611204a642ebea3d70bc60cc180d0f"} Jan 21 10:22:24 crc kubenswrapper[4684]: I0121 10:22:24.600328 4684 scope.go:117] "RemoveContainer" containerID="2f73b7ee5af33ba26b7cb89cfe16400e89ad62c09e6adfe48a3bdf61771901a4" Jan 21 10:22:24 crc kubenswrapper[4684]: I0121 10:22:24.600840 4684 scope.go:117] "RemoveContainer" containerID="30b4f3f714e1ba1d2ac089350394ad4ad0611204a642ebea3d70bc60cc180d0f" Jan 21 10:22:24 crc kubenswrapper[4684]: E0121 10:22:24.601021 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4_service-telemetry(a37b014b-6f8d-4179-be35-4896b00e0aec)\"" 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4" podUID="a37b014b-6f8d-4179-be35-4896b00e0aec" Jan 21 10:22:24 crc kubenswrapper[4684]: I0121 10:22:24.645827 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-h996c"] Jan 21 10:22:25 crc kubenswrapper[4684]: I0121 10:22:25.609689 4684 generic.go:334] "Generic (PLEG): container finished" podID="a606d589-c44b-4751-b3ff-54b86dd83209" containerID="f1b5c30f405196cc78ad741d42ece63187637db736fbdf451ca94bb07e1bac40" exitCode=0 Jan 21 10:22:25 crc kubenswrapper[4684]: I0121 10:22:25.609776 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2" event={"ID":"a606d589-c44b-4751-b3ff-54b86dd83209","Type":"ContainerDied","Data":"f1b5c30f405196cc78ad741d42ece63187637db736fbdf451ca94bb07e1bac40"} Jan 21 10:22:25 crc kubenswrapper[4684]: I0121 10:22:25.610131 4684 scope.go:117] "RemoveContainer" containerID="2255e03e59b145a3619a1e0d252c95d91abccb1b2c7650c2d5c8cc409b30458e" Jan 21 10:22:25 crc kubenswrapper[4684]: I0121 10:22:25.610635 4684 scope.go:117] "RemoveContainer" containerID="f1b5c30f405196cc78ad741d42ece63187637db736fbdf451ca94bb07e1bac40" Jan 21 10:22:25 crc kubenswrapper[4684]: E0121 10:22:25.610839 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2_service-telemetry(a606d589-c44b-4751-b3ff-54b86dd83209)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2" podUID="a606d589-c44b-4751-b3ff-54b86dd83209" Jan 21 10:22:25 crc kubenswrapper[4684]: I0121 10:22:25.612910 4684 generic.go:334] "Generic (PLEG): container finished" podID="e2618a9c-cdc6-48e5-8b8e-eeb451329b9c" containerID="54eecdbd2a3099e2292cb569ab8196c7d2e0ec1a4adb7b5309e7bb2cc5dc87c7" exitCode=0 Jan 21 10:22:25 crc kubenswrapper[4684]: I0121 10:22:25.612965 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-6859n" event={"ID":"e2618a9c-cdc6-48e5-8b8e-eeb451329b9c","Type":"ContainerDied","Data":"54eecdbd2a3099e2292cb569ab8196c7d2e0ec1a4adb7b5309e7bb2cc5dc87c7"} Jan 21 10:22:25 crc kubenswrapper[4684]: I0121 10:22:25.613554 4684 scope.go:117] "RemoveContainer" containerID="54eecdbd2a3099e2292cb569ab8196c7d2e0ec1a4adb7b5309e7bb2cc5dc87c7" Jan 21 10:22:25 crc kubenswrapper[4684]: E0121 10:22:25.613751 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7996dc9458-6859n_service-telemetry(e2618a9c-cdc6-48e5-8b8e-eeb451329b9c)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-6859n" podUID="e2618a9c-cdc6-48e5-8b8e-eeb451329b9c" Jan 21 10:22:25 crc kubenswrapper[4684]: I0121 10:22:25.615660 4684 generic.go:334] "Generic (PLEG): container finished" podID="7af3e6f3-abfd-489c-9d2b-2c1469076565" containerID="5bab1e1afe0ecbccf9e26e814a67155fb17392e0bce31c4fd84a211893b84172" exitCode=0 Jan 21 10:22:25 crc kubenswrapper[4684]: I0121 10:22:25.615717 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5" 
event={"ID":"7af3e6f3-abfd-489c-9d2b-2c1469076565","Type":"ContainerDied","Data":"5bab1e1afe0ecbccf9e26e814a67155fb17392e0bce31c4fd84a211893b84172"} Jan 21 10:22:25 crc kubenswrapper[4684]: I0121 10:22:25.616006 4684 scope.go:117] "RemoveContainer" containerID="5bab1e1afe0ecbccf9e26e814a67155fb17392e0bce31c4fd84a211893b84172" Jan 21 10:22:25 crc kubenswrapper[4684]: E0121 10:22:25.616164 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5_service-telemetry(7af3e6f3-abfd-489c-9d2b-2c1469076565)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5" podUID="7af3e6f3-abfd-489c-9d2b-2c1469076565" Jan 21 10:22:25 crc kubenswrapper[4684]: I0121 10:22:25.618220 4684 generic.go:334] "Generic (PLEG): container finished" podID="6ea3f691-f7ad-4d69-97ef-3427114b483b" containerID="31439a90d1827c0c1901c96580017b6de80850405ab1b5d7d86741a04d79bfbf" exitCode=0 Jan 21 10:22:25 crc kubenswrapper[4684]: I0121 10:22:25.618282 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx" event={"ID":"6ea3f691-f7ad-4d69-97ef-3427114b483b","Type":"ContainerDied","Data":"31439a90d1827c0c1901c96580017b6de80850405ab1b5d7d86741a04d79bfbf"} Jan 21 10:22:25 crc kubenswrapper[4684]: I0121 10:22:25.619735 4684 scope.go:117] "RemoveContainer" containerID="31439a90d1827c0c1901c96580017b6de80850405ab1b5d7d86741a04d79bfbf" Jan 21 10:22:25 crc kubenswrapper[4684]: E0121 10:22:25.619967 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx_service-telemetry(6ea3f691-f7ad-4d69-97ef-3427114b483b)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx" podUID="6ea3f691-f7ad-4d69-97ef-3427114b483b" Jan 21 10:22:25 crc kubenswrapper[4684]: I0121 10:22:25.620962 4684 scope.go:117] "RemoveContainer" containerID="30b4f3f714e1ba1d2ac089350394ad4ad0611204a642ebea3d70bc60cc180d0f" Jan 21 10:22:25 crc kubenswrapper[4684]: E0121 10:22:25.621100 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4_service-telemetry(a37b014b-6f8d-4179-be35-4896b00e0aec)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4" podUID="a37b014b-6f8d-4179-be35-4896b00e0aec" Jan 21 10:22:25 crc kubenswrapper[4684]: I0121 10:22:25.625563 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-h996c" event={"ID":"199561ac-4446-4757-99b3-d9be6b135398","Type":"ContainerStarted","Data":"8cc62acac91c76251d06c82c7b5d9e0eadc764af6cfdca3c856652c6e93fbae1"} Jan 21 10:22:25 crc kubenswrapper[4684]: I0121 10:22:25.625614 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-h996c" event={"ID":"199561ac-4446-4757-99b3-d9be6b135398","Type":"ContainerStarted","Data":"c824c40189cae67d2e646257585be4cbe662bdeab97aa9a21c28a423f1d7b40e"} Jan 21 10:22:25 crc kubenswrapper[4684]: I0121 10:22:25.643253 4684 scope.go:117] "RemoveContainer" 
containerID="bbe2e111681af328bf7e340edde7ee25c85651b77375fc735aedef3df8820c8b" Jan 21 10:22:25 crc kubenswrapper[4684]: I0121 10:22:25.688404 4684 scope.go:117] "RemoveContainer" containerID="14cb292f8d2c04639ef0bb44c9df4cba45df2c145fac2298875eda2c5cfe120d" Jan 21 10:22:25 crc kubenswrapper[4684]: I0121 10:22:25.778292 4684 scope.go:117] "RemoveContainer" containerID="606119a6e8a19cd993c1cf879e9c6f7794e777dcf78ac11e514b1d55a29809e3" Jan 21 10:22:25 crc kubenswrapper[4684]: I0121 10:22:25.837069 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-h996c" podStartSLOduration=4.837052675 podStartE2EDuration="4.837052675s" podCreationTimestamp="2026-01-21 10:22:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:22:25.787209491 +0000 UTC m=+983.545292458" watchObservedRunningTime="2026-01-21 10:22:25.837052675 +0000 UTC m=+983.595135642" Jan 21 10:22:27 crc kubenswrapper[4684]: I0121 10:22:27.007411 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Jan 21 10:22:27 crc kubenswrapper[4684]: I0121 10:22:27.008485 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Jan 21 10:22:27 crc kubenswrapper[4684]: I0121 10:22:27.010544 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"qdr-test-config" Jan 21 10:22:27 crc kubenswrapper[4684]: I0121 10:22:27.013850 4684 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-selfsigned" Jan 21 10:22:27 crc kubenswrapper[4684]: I0121 10:22:27.020631 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Jan 21 10:22:27 crc kubenswrapper[4684]: I0121 10:22:27.095776 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2vcs\" (UniqueName: \"kubernetes.io/projected/fdca0836-0709-42a3-9fe5-78e3913422aa-kube-api-access-k2vcs\") pod \"qdr-test\" (UID: \"fdca0836-0709-42a3-9fe5-78e3913422aa\") " pod="service-telemetry/qdr-test" Jan 21 10:22:27 crc kubenswrapper[4684]: I0121 10:22:27.096029 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/fdca0836-0709-42a3-9fe5-78e3913422aa-qdr-test-config\") pod \"qdr-test\" (UID: \"fdca0836-0709-42a3-9fe5-78e3913422aa\") " pod="service-telemetry/qdr-test" Jan 21 10:22:27 crc kubenswrapper[4684]: I0121 10:22:27.096247 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/fdca0836-0709-42a3-9fe5-78e3913422aa-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"fdca0836-0709-42a3-9fe5-78e3913422aa\") " pod="service-telemetry/qdr-test" Jan 21 10:22:27 crc kubenswrapper[4684]: I0121 10:22:27.197925 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/fdca0836-0709-42a3-9fe5-78e3913422aa-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"fdca0836-0709-42a3-9fe5-78e3913422aa\") " pod="service-telemetry/qdr-test" Jan 21 10:22:27 crc kubenswrapper[4684]: I0121 10:22:27.198021 4684 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-k2vcs\" (UniqueName: \"kubernetes.io/projected/fdca0836-0709-42a3-9fe5-78e3913422aa-kube-api-access-k2vcs\") pod \"qdr-test\" (UID: \"fdca0836-0709-42a3-9fe5-78e3913422aa\") " pod="service-telemetry/qdr-test" Jan 21 10:22:27 crc kubenswrapper[4684]: I0121 10:22:27.198091 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/fdca0836-0709-42a3-9fe5-78e3913422aa-qdr-test-config\") pod \"qdr-test\" (UID: \"fdca0836-0709-42a3-9fe5-78e3913422aa\") " pod="service-telemetry/qdr-test" Jan 21 10:22:27 crc kubenswrapper[4684]: I0121 10:22:27.198915 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/fdca0836-0709-42a3-9fe5-78e3913422aa-qdr-test-config\") pod \"qdr-test\" (UID: \"fdca0836-0709-42a3-9fe5-78e3913422aa\") " pod="service-telemetry/qdr-test" Jan 21 10:22:27 crc kubenswrapper[4684]: I0121 10:22:27.211065 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/fdca0836-0709-42a3-9fe5-78e3913422aa-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"fdca0836-0709-42a3-9fe5-78e3913422aa\") " pod="service-telemetry/qdr-test" Jan 21 10:22:27 crc kubenswrapper[4684]: I0121 10:22:27.214902 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2vcs\" (UniqueName: \"kubernetes.io/projected/fdca0836-0709-42a3-9fe5-78e3913422aa-kube-api-access-k2vcs\") pod \"qdr-test\" (UID: \"fdca0836-0709-42a3-9fe5-78e3913422aa\") " pod="service-telemetry/qdr-test" Jan 21 10:22:27 crc kubenswrapper[4684]: I0121 10:22:27.331185 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/qdr-test" Jan 21 10:22:27 crc kubenswrapper[4684]: I0121 10:22:27.549368 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Jan 21 10:22:27 crc kubenswrapper[4684]: I0121 10:22:27.646810 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"fdca0836-0709-42a3-9fe5-78e3913422aa","Type":"ContainerStarted","Data":"ada7f29c3131f330ac491fb93b0ebf5c3ee2c8a698f77e408e9407c1460cd2e9"} Jan 21 10:22:36 crc kubenswrapper[4684]: I0121 10:22:36.514914 4684 scope.go:117] "RemoveContainer" containerID="30b4f3f714e1ba1d2ac089350394ad4ad0611204a642ebea3d70bc60cc180d0f" Jan 21 10:22:37 crc kubenswrapper[4684]: I0121 10:22:37.302776 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:22:37 crc kubenswrapper[4684]: I0121 10:22:37.303347 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:22:37 crc kubenswrapper[4684]: I0121 10:22:37.303440 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" Jan 21 10:22:37 crc kubenswrapper[4684]: I0121 10:22:37.304461 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f4462918383bc467ad7f03eeec652df2cace8658f28e603ce544d6d7137b741"} pod="openshift-machine-config-operator/machine-config-daemon-sff6s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 10:22:37 crc kubenswrapper[4684]: I0121 10:22:37.304528 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" containerID="cri-o://6f4462918383bc467ad7f03eeec652df2cace8658f28e603ce544d6d7137b741" gracePeriod=600 Jan 21 10:22:37 crc kubenswrapper[4684]: I0121 10:22:37.515310 4684 scope.go:117] "RemoveContainer" containerID="31439a90d1827c0c1901c96580017b6de80850405ab1b5d7d86741a04d79bfbf" Jan 21 10:22:37 crc kubenswrapper[4684]: I0121 10:22:37.746354 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"fdca0836-0709-42a3-9fe5-78e3913422aa","Type":"ContainerStarted","Data":"27b3181316d218dd64e79845b723caa978c822df969736080666e95b6bbe80c6"} Jan 21 10:22:37 crc kubenswrapper[4684]: I0121 10:22:37.749467 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4" event={"ID":"a37b014b-6f8d-4179-be35-4896b00e0aec","Type":"ContainerStarted","Data":"f3f2d240f05789bf2fbe1042fdea4284f9e5a9884f80e17b812c4c29beb5eb0d"} Jan 21 10:22:37 crc kubenswrapper[4684]: I0121 10:22:37.752934 4684 generic.go:334] "Generic (PLEG): container finished" podID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerID="6f4462918383bc467ad7f03eeec652df2cace8658f28e603ce544d6d7137b741" 
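The machine-config-daemon sequence above is the standard liveness-restart path: a failed probe ("Probe failed"), the sync loop picking the result up ("SyncLoop (probe)" with status "unhealthy"), then the kubelet killing the container with its termination grace period before starting a replacement. A sketch that pairs probe failures with the kill that follows for the same pod; it assumes input shaped like the entries above:

    import re

    PROBE_FAIL = re.compile(r'"Probe failed" probeType="(?P<type>\w+)" pod="(?P<pod>[^"]+)"')
    KILLING = re.compile(
        r'"Killing container with a grace period" pod="(?P<pod>[^"]+)".*?gracePeriod=(?P<grace>\d+)'
    )

    def probe_kills(lines):
        """Report containers killed after a probe failure, with the grace period used."""
        failed = {}  # pod -> probe type of the most recent failure
        for line in lines:
            f = PROBE_FAIL.search(line)
            if f:
                failed[f['pod']] = f['type']
            k = KILLING.search(line)
            if k and k['pod'] in failed:
                print(f"{k['pod']}: {failed[k['pod']]} probe failed, "
                      f"killed with gracePeriod={k['grace']}s")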
exitCode=0 Jan 21 10:22:37 crc kubenswrapper[4684]: I0121 10:22:37.752975 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" event={"ID":"55d2c484-cf10-46b5-913f-2a033a2ff5c1","Type":"ContainerDied","Data":"6f4462918383bc467ad7f03eeec652df2cace8658f28e603ce544d6d7137b741"} Jan 21 10:22:37 crc kubenswrapper[4684]: I0121 10:22:37.753011 4684 scope.go:117] "RemoveContainer" containerID="be7cd103ac0b509678b75cd5e797eb3c7c476dcd51d6bf722a679b736d58aab7" Jan 21 10:22:37 crc kubenswrapper[4684]: I0121 10:22:37.767611 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=2.646526396 podStartE2EDuration="11.76759703s" podCreationTimestamp="2026-01-21 10:22:26 +0000 UTC" firstStartedPulling="2026-01-21 10:22:27.557982729 +0000 UTC m=+985.316065696" lastFinishedPulling="2026-01-21 10:22:36.679053363 +0000 UTC m=+994.437136330" observedRunningTime="2026-01-21 10:22:37.762134512 +0000 UTC m=+995.520217479" watchObservedRunningTime="2026-01-21 10:22:37.76759703 +0000 UTC m=+995.525679997" Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.045897 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-mhp4b"] Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.046986 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-mhp4b" Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.049060 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script" Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.049165 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config" Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.049319 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script" Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.049660 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher" Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.051267 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log" Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.059845 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-mhp4b"] Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.062204 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config" Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.180731 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-ceilometer-publisher\") pod \"stf-smoketest-smoke1-mhp4b\" (UID: \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\") " pod="service-telemetry/stf-smoketest-smoke1-mhp4b" Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.181452 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-ceilometer-entrypoint-script\") pod 
\"stf-smoketest-smoke1-mhp4b\" (UID: \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\") " pod="service-telemetry/stf-smoketest-smoke1-mhp4b" Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.181562 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-collectd-config\") pod \"stf-smoketest-smoke1-mhp4b\" (UID: \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\") " pod="service-telemetry/stf-smoketest-smoke1-mhp4b" Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.181692 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-healthcheck-log\") pod \"stf-smoketest-smoke1-mhp4b\" (UID: \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\") " pod="service-telemetry/stf-smoketest-smoke1-mhp4b" Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.181790 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-mhp4b\" (UID: \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\") " pod="service-telemetry/stf-smoketest-smoke1-mhp4b" Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.181991 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vcgs\" (UniqueName: \"kubernetes.io/projected/40476f9d-b18f-40ca-ac65-eac8a8ac1639-kube-api-access-9vcgs\") pod \"stf-smoketest-smoke1-mhp4b\" (UID: \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\") " pod="service-telemetry/stf-smoketest-smoke1-mhp4b" Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.182091 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-sensubility-config\") pod \"stf-smoketest-smoke1-mhp4b\" (UID: \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\") " pod="service-telemetry/stf-smoketest-smoke1-mhp4b" Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.283536 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vcgs\" (UniqueName: \"kubernetes.io/projected/40476f9d-b18f-40ca-ac65-eac8a8ac1639-kube-api-access-9vcgs\") pod \"stf-smoketest-smoke1-mhp4b\" (UID: \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\") " pod="service-telemetry/stf-smoketest-smoke1-mhp4b" Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.283916 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-sensubility-config\") pod \"stf-smoketest-smoke1-mhp4b\" (UID: \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\") " pod="service-telemetry/stf-smoketest-smoke1-mhp4b" Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.284076 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-ceilometer-publisher\") pod \"stf-smoketest-smoke1-mhp4b\" (UID: \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\") " pod="service-telemetry/stf-smoketest-smoke1-mhp4b" Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.284562 4684 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-mhp4b\" (UID: \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\") " pod="service-telemetry/stf-smoketest-smoke1-mhp4b" Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.285434 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-mhp4b\" (UID: \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\") " pod="service-telemetry/stf-smoketest-smoke1-mhp4b" Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.285237 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-ceilometer-publisher\") pod \"stf-smoketest-smoke1-mhp4b\" (UID: \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\") " pod="service-telemetry/stf-smoketest-smoke1-mhp4b" Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.285316 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-sensubility-config\") pod \"stf-smoketest-smoke1-mhp4b\" (UID: \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\") " pod="service-telemetry/stf-smoketest-smoke1-mhp4b" Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.285458 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-collectd-config\") pod \"stf-smoketest-smoke1-mhp4b\" (UID: \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\") " pod="service-telemetry/stf-smoketest-smoke1-mhp4b" Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.285798 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-healthcheck-log\") pod \"stf-smoketest-smoke1-mhp4b\" (UID: \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\") " pod="service-telemetry/stf-smoketest-smoke1-mhp4b" Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.286646 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-mhp4b\" (UID: \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\") " pod="service-telemetry/stf-smoketest-smoke1-mhp4b" Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.286578 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-healthcheck-log\") pod \"stf-smoketest-smoke1-mhp4b\" (UID: \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\") " pod="service-telemetry/stf-smoketest-smoke1-mhp4b" Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.286061 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-collectd-config\") pod \"stf-smoketest-smoke1-mhp4b\" (UID: \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\") " pod="service-telemetry/stf-smoketest-smoke1-mhp4b" Jan 21 10:22:38 crc kubenswrapper[4684]: 
I0121 10:22:38.287434 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-mhp4b\" (UID: \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\") " pod="service-telemetry/stf-smoketest-smoke1-mhp4b"
Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.307238 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vcgs\" (UniqueName: \"kubernetes.io/projected/40476f9d-b18f-40ca-ac65-eac8a8ac1639-kube-api-access-9vcgs\") pod \"stf-smoketest-smoke1-mhp4b\" (UID: \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\") " pod="service-telemetry/stf-smoketest-smoke1-mhp4b"
Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.380422 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-mhp4b"
Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.425642 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"]
Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.427113 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl"
Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.442080 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"]
Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.514577 4684 scope.go:117] "RemoveContainer" containerID="5bab1e1afe0ecbccf9e26e814a67155fb17392e0bce31c4fd84a211893b84172"
Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.515618 4684 scope.go:117] "RemoveContainer" containerID="54eecdbd2a3099e2292cb569ab8196c7d2e0ec1a4adb7b5309e7bb2cc5dc87c7"
Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.598340 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flm6g\" (UniqueName: \"kubernetes.io/projected/160a3579-c253-4eef-bc2f-1b03bdb3d21a-kube-api-access-flm6g\") pod \"curl\" (UID: \"160a3579-c253-4eef-bc2f-1b03bdb3d21a\") " pod="service-telemetry/curl"
Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.699430 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flm6g\" (UniqueName: \"kubernetes.io/projected/160a3579-c253-4eef-bc2f-1b03bdb3d21a-kube-api-access-flm6g\") pod \"curl\" (UID: \"160a3579-c253-4eef-bc2f-1b03bdb3d21a\") " pod="service-telemetry/curl"
Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.729100 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flm6g\" (UniqueName: \"kubernetes.io/projected/160a3579-c253-4eef-bc2f-1b03bdb3d21a-kube-api-access-flm6g\") pod \"curl\" (UID: \"160a3579-c253-4eef-bc2f-1b03bdb3d21a\") " pod="service-telemetry/curl"
Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.760823 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" event={"ID":"55d2c484-cf10-46b5-913f-2a033a2ff5c1","Type":"ContainerStarted","Data":"47abcdb0c669edba9be769bd0f81dd5ed365b01f4428fe3808be955facfe8deb"}
Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.763495 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx"
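The volume lines above show the reconciler's three-step mount flow for the new curl pod: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded, after which the sandbox can be created. A sketch that tracks each named volume through those stages, assuming one journal entry per line; the stage strings come from the entries above, the function name is mine:

    import re
    from collections import defaultdict

    # Reconciler stages in the order they should appear for every volume.
    STAGES = [
        'VerifyControllerAttachedVolume started',
        'MountVolume started',
        'MountVolume.SetUp succeeded',
    ]
    # Volume names appear as: volume \"kube-api-access-flm6g\" (escaped quotes in klog output).
    VOLUME = re.compile(r'volume \\?"(?P<name>[^\\"]+)\\?"')

    def mount_progress(lines):
        """Map volume name -> list of reconciler stages observed, in log order."""
        seen = defaultdict(list)
        for line in lines:
            for stage in STAGES:
                if stage in line:
                    m = VOLUME.search(line)
                    if m:
                        seen[m['name']].append(stage)
        return seen

A volume whose list stops before 'MountVolume.SetUp succeeded' is the usual first clue when a pod hangs in ContainerCreating.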
event={"ID":"6ea3f691-f7ad-4d69-97ef-3427114b483b","Type":"ContainerStarted","Data":"36875f9db77729a3cc0e835b0353347781c7c32067809d5a8f540b29234f6c40"} Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.814276 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Jan 21 10:22:38 crc kubenswrapper[4684]: I0121 10:22:38.904024 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-mhp4b"] Jan 21 10:22:39 crc kubenswrapper[4684]: I0121 10:22:39.077275 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Jan 21 10:22:39 crc kubenswrapper[4684]: I0121 10:22:39.514046 4684 scope.go:117] "RemoveContainer" containerID="f1b5c30f405196cc78ad741d42ece63187637db736fbdf451ca94bb07e1bac40" Jan 21 10:22:39 crc kubenswrapper[4684]: I0121 10:22:39.772241 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-mhp4b" event={"ID":"40476f9d-b18f-40ca-ac65-eac8a8ac1639","Type":"ContainerStarted","Data":"4c81f7ac17a03498d45895cfccf9a6ec414021aa679f24293eb30f15bfdbdfa8"} Jan 21 10:22:39 crc kubenswrapper[4684]: I0121 10:22:39.775029 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-6859n" event={"ID":"e2618a9c-cdc6-48e5-8b8e-eeb451329b9c","Type":"ContainerStarted","Data":"774588eb8d84bb7b4990e541d9cb11332f0ec1892abda98aaf19ca8ce7786261"} Jan 21 10:22:39 crc kubenswrapper[4684]: I0121 10:22:39.779664 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5" event={"ID":"7af3e6f3-abfd-489c-9d2b-2c1469076565","Type":"ContainerStarted","Data":"8a47d2857e69812d31ef9b29b73c537d2f08fe0d639bd658b62dde930906b357"} Jan 21 10:22:39 crc kubenswrapper[4684]: I0121 10:22:39.782269 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"160a3579-c253-4eef-bc2f-1b03bdb3d21a","Type":"ContainerStarted","Data":"551d4df7e74fc98280ad764271949ebb6a892fb01eadcb31caaaadaa9c9cc9e5"} Jan 21 10:22:40 crc kubenswrapper[4684]: I0121 10:22:40.808407 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2" event={"ID":"a606d589-c44b-4751-b3ff-54b86dd83209","Type":"ContainerStarted","Data":"feeed2e5560eab2dbd89d594f788597da6f9d58bf9f62a6ecf0abe92acbd5882"} Jan 21 10:22:41 crc kubenswrapper[4684]: I0121 10:22:41.819310 4684 generic.go:334] "Generic (PLEG): container finished" podID="160a3579-c253-4eef-bc2f-1b03bdb3d21a" containerID="c72201f1438468285eb7c9c38420c52b231c93da94c706b52626dcf27b0dad1a" exitCode=0 Jan 21 10:22:41 crc kubenswrapper[4684]: I0121 10:22:41.819378 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"160a3579-c253-4eef-bc2f-1b03bdb3d21a","Type":"ContainerDied","Data":"c72201f1438468285eb7c9c38420c52b231c93da94c706b52626dcf27b0dad1a"} Jan 21 10:22:43 crc kubenswrapper[4684]: I0121 10:22:43.111941 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Jan 21 10:22:43 crc kubenswrapper[4684]: I0121 10:22:43.178523 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flm6g\" (UniqueName: \"kubernetes.io/projected/160a3579-c253-4eef-bc2f-1b03bdb3d21a-kube-api-access-flm6g\") pod \"160a3579-c253-4eef-bc2f-1b03bdb3d21a\" (UID: \"160a3579-c253-4eef-bc2f-1b03bdb3d21a\") " Jan 21 10:22:43 crc kubenswrapper[4684]: I0121 10:22:43.187175 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/160a3579-c253-4eef-bc2f-1b03bdb3d21a-kube-api-access-flm6g" (OuterVolumeSpecName: "kube-api-access-flm6g") pod "160a3579-c253-4eef-bc2f-1b03bdb3d21a" (UID: "160a3579-c253-4eef-bc2f-1b03bdb3d21a"). InnerVolumeSpecName "kube-api-access-flm6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:22:43 crc kubenswrapper[4684]: I0121 10:22:43.491868 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flm6g\" (UniqueName: \"kubernetes.io/projected/160a3579-c253-4eef-bc2f-1b03bdb3d21a-kube-api-access-flm6g\") on node \"crc\" DevicePath \"\"" Jan 21 10:22:43 crc kubenswrapper[4684]: I0121 10:22:43.497888 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_160a3579-c253-4eef-bc2f-1b03bdb3d21a/curl/0.log" Jan 21 10:22:43 crc kubenswrapper[4684]: I0121 10:22:43.757497 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-78bcbbdcff-zbs45_e1a540ef-a29d-4944-b473-16c7efe8d573/prometheus-webhook-snmp/0.log" Jan 21 10:22:43 crc kubenswrapper[4684]: I0121 10:22:43.841230 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"160a3579-c253-4eef-bc2f-1b03bdb3d21a","Type":"ContainerDied","Data":"551d4df7e74fc98280ad764271949ebb6a892fb01eadcb31caaaadaa9c9cc9e5"} Jan 21 10:22:43 crc kubenswrapper[4684]: I0121 10:22:43.841753 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="551d4df7e74fc98280ad764271949ebb6a892fb01eadcb31caaaadaa9c9cc9e5" Jan 21 10:22:43 crc kubenswrapper[4684]: I0121 10:22:43.841331 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Jan 21 10:22:53 crc kubenswrapper[4684]: E0121 10:22:53.156580 4684 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/tripleomastercentos9/openstack-collectd:current-tripleo" Jan 21 10:22:53 crc kubenswrapper[4684]: E0121 10:22:53.157111 4684 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:smoketest-collectd,Image:quay.io/tripleomastercentos9/openstack-collectd:current-tripleo,Command:[/smoketest_collectd_entrypoint.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CLOUDNAME,Value:smoke1,ValueFrom:nil,},EnvVar{Name:ELASTICSEARCH_AUTH_PASS,Value:vDW9etyfUa00SvEpBA2czs85,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_AUTH_TOKEN,Value:eyJhbGciOiJSUzI1NiIsImtpZCI6InF6SnFxNFFjbVk5VmJQZ2dNMmUxdHFmTlJlVWx4UDhSTlhIamV3RUx4WU0ifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjIl0sImV4cCI6MTc2ODk5NDU0MiwiaWF0IjoxNzY4OTkwOTQyLCJpc3MiOiJodHRwczovL2t1YmVybmV0ZXMuZGVmYXVsdC5zdmMiLCJqdGkiOiJhYzgxOTRkYy1iYzU2LTQ1MzYtYTJhNC0zZTAwNDFhYjg3ZmYiLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6InNlcnZpY2UtdGVsZW1ldHJ5Iiwic2VydmljZWFjY291bnQiOnsibmFtZSI6InN0Zi1wcm9tZXRoZXVzLXJlYWRlciIsInVpZCI6IjgwMTg4MGRlLTc4Y2UtNGEwMS05YTMxLWM2NWI0NDM3NGUxZSJ9fSwibmJmIjoxNzY4OTkwOTQyLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6c2VydmljZS10ZWxlbWV0cnk6c3RmLXByb21ldGhldXMtcmVhZGVyIn0.o4mU5Ct2VuegGIsR-ClWpV-PlbqN-U8G7aJO0dpDKXx_bRMFszUcO1s9xIvzW-0C9YSzDr-Ptk2K8FhypWB9GT9EIcpPGpoU-JW_Fl6LnD2d8LLyhvm6TG4d8oBjec4iHQaWTUPrbhZ9SqaCLIMCVuFco60njcWekvyssVGEXHUVZr0Lsa5jE3PoEAh64x-2hENKPkX0Zfjs4fKBMqr2rMgEcnje-U4z0YYHPsj1KOhj3lCx3vSWP6AcWzXBk-7R6FOn6jxOvFjkPhiwYpJbRhk8mErgtIE4Ny2D9q3dfEbqWf-98fDkgpFMth-Hg2cliq4SauWlCXl5aczGGTyPo9zaR74v7074DuTBRHkcaXx0PMpyNMYbj3SkPncBBas8oe1wIfE9bEJGiLGD9bCHnXjMtqD0_yjFEdfmyon2erAHmpNXZEOnHEcKYmpsuKkYwFDTS306UVFfc8xD1DR-rEYEeNm8hy8EMO5yawpjC_i8N3yjY9n1rmVg7mPo57SSuNEp5RA6KIz0aXFFh_AIn_tyAxvylhMiRuLxMtv63RGBD4C-XL59IlgdcxOhsgQcCrL1KFuQzf4ao8naItmGz9w3nHVCPkvZgcPK4XpweSY8nvJMpggomTh4zaMrFG-Q5fiBsx90j7sZu5aaJy31ewdEzDjK1TF8hCMhgfNT4c8,ValueFrom:nil,},EnvVar{Name:OBSERVABILITY_STRATEGY,Value:<>,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:collectd-config,ReadOnly:false,MountPath:/etc/minimal-collectd.conf.template,SubPath:minimal-collectd.conf.template,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:sensubility-config,ReadOnly:false,MountPath:/etc/collectd-sensubility.conf,SubPath:collectd-sensubility.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:healthcheck-log,ReadOnly:false,MountPath:/healthcheck.log,SubPath:healthcheck.log,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:collectd-entrypoint-script,ReadOnly:false,MountPath:/smoketest_collectd_entrypoint.sh,SubPath:smoketest_collectd_entrypoint.sh,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9vcgs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,All
owPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod stf-smoketest-smoke1-mhp4b_service-telemetry(40476f9d-b18f-40ca-ac65-eac8a8ac1639): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:22:58 crc kubenswrapper[4684]: E0121 10:22:58.441141 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"smoketest-collectd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/stf-smoketest-smoke1-mhp4b" podUID="40476f9d-b18f-40ca-ac65-eac8a8ac1639" Jan 21 10:22:58 crc kubenswrapper[4684]: I0121 10:22:58.958337 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-mhp4b" event={"ID":"40476f9d-b18f-40ca-ac65-eac8a8ac1639","Type":"ContainerStarted","Data":"0b446d6a68a37a1d4c539a2f8bca7db74494b952784da5979761df8c7671b68f"} Jan 21 10:22:58 crc kubenswrapper[4684]: E0121 10:22:58.962332 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"smoketest-collectd\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/tripleomastercentos9/openstack-collectd:current-tripleo\\\"\"" pod="service-telemetry/stf-smoketest-smoke1-mhp4b" podUID="40476f9d-b18f-40ca-ac65-eac8a8ac1639" Jan 21 10:22:59 crc kubenswrapper[4684]: E0121 10:22:59.968202 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"smoketest-collectd\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/tripleomastercentos9/openstack-collectd:current-tripleo\\\"\"" pod="service-telemetry/stf-smoketest-smoke1-mhp4b" podUID="40476f9d-b18f-40ca-ac65-eac8a8ac1639" Jan 21 10:23:13 crc kubenswrapper[4684]: I0121 10:23:13.075226 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-mhp4b" event={"ID":"40476f9d-b18f-40ca-ac65-eac8a8ac1639","Type":"ContainerStarted","Data":"a4db26df484142f3d5a1bae023b4413c15af710480fd67a52f4d4c3eb1b27d8b"} Jan 21 10:23:13 crc kubenswrapper[4684]: I0121 10:23:13.103354 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-mhp4b" podStartSLOduration=1.757594676 podStartE2EDuration="35.103337408s" podCreationTimestamp="2026-01-21 10:22:38 +0000 UTC" firstStartedPulling="2026-01-21 10:22:38.921321984 +0000 UTC m=+996.679404951" lastFinishedPulling="2026-01-21 10:23:12.267064706 +0000 UTC m=+1030.025147683" observedRunningTime="2026-01-21 10:23:13.099810139 +0000 UTC m=+1030.857893116" watchObservedRunningTime="2026-01-21 10:23:13.103337408 +0000 UTC m=+1030.861420375" Jan 21 10:23:13 crc kubenswrapper[4684]: I0121 10:23:13.951045 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-78bcbbdcff-zbs45_e1a540ef-a29d-4944-b473-16c7efe8d573/prometheus-webhook-snmp/0.log" Jan 21 10:23:31 crc kubenswrapper[4684]: I0121 10:23:31.224135 4684 generic.go:334] "Generic (PLEG): container finished" podID="40476f9d-b18f-40ca-ac65-eac8a8ac1639" containerID="0b446d6a68a37a1d4c539a2f8bca7db74494b952784da5979761df8c7671b68f" exitCode=0 Jan 21 10:23:31 crc kubenswrapper[4684]: I0121 10:23:31.224220 4684 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-mhp4b" event={"ID":"40476f9d-b18f-40ca-ac65-eac8a8ac1639","Type":"ContainerDied","Data":"0b446d6a68a37a1d4c539a2f8bca7db74494b952784da5979761df8c7671b68f"} Jan 21 10:23:31 crc kubenswrapper[4684]: I0121 10:23:31.225340 4684 scope.go:117] "RemoveContainer" containerID="0b446d6a68a37a1d4c539a2f8bca7db74494b952784da5979761df8c7671b68f" Jan 21 10:23:46 crc kubenswrapper[4684]: I0121 10:23:46.352961 4684 generic.go:334] "Generic (PLEG): container finished" podID="40476f9d-b18f-40ca-ac65-eac8a8ac1639" containerID="a4db26df484142f3d5a1bae023b4413c15af710480fd67a52f4d4c3eb1b27d8b" exitCode=0 Jan 21 10:23:46 crc kubenswrapper[4684]: I0121 10:23:46.353069 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-mhp4b" event={"ID":"40476f9d-b18f-40ca-ac65-eac8a8ac1639","Type":"ContainerDied","Data":"a4db26df484142f3d5a1bae023b4413c15af710480fd67a52f4d4c3eb1b27d8b"} Jan 21 10:23:47 crc kubenswrapper[4684]: I0121 10:23:47.652590 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-mhp4b" Jan 21 10:23:47 crc kubenswrapper[4684]: I0121 10:23:47.814658 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-healthcheck-log\") pod \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\" (UID: \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\") " Jan 21 10:23:47 crc kubenswrapper[4684]: I0121 10:23:47.814713 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-collectd-entrypoint-script\") pod \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\" (UID: \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\") " Jan 21 10:23:47 crc kubenswrapper[4684]: I0121 10:23:47.814743 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-sensubility-config\") pod \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\" (UID: \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\") " Jan 21 10:23:47 crc kubenswrapper[4684]: I0121 10:23:47.814807 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-ceilometer-entrypoint-script\") pod \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\" (UID: \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\") " Jan 21 10:23:47 crc kubenswrapper[4684]: I0121 10:23:47.814843 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-collectd-config\") pod \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\" (UID: \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\") " Jan 21 10:23:47 crc kubenswrapper[4684]: I0121 10:23:47.814918 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-ceilometer-publisher\") pod \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\" (UID: \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\") " Jan 21 10:23:47 crc kubenswrapper[4684]: I0121 10:23:47.815009 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-9vcgs\" (UniqueName: \"kubernetes.io/projected/40476f9d-b18f-40ca-ac65-eac8a8ac1639-kube-api-access-9vcgs\") pod \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\" (UID: \"40476f9d-b18f-40ca-ac65-eac8a8ac1639\") " Jan 21 10:23:47 crc kubenswrapper[4684]: I0121 10:23:47.821792 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40476f9d-b18f-40ca-ac65-eac8a8ac1639-kube-api-access-9vcgs" (OuterVolumeSpecName: "kube-api-access-9vcgs") pod "40476f9d-b18f-40ca-ac65-eac8a8ac1639" (UID: "40476f9d-b18f-40ca-ac65-eac8a8ac1639"). InnerVolumeSpecName "kube-api-access-9vcgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:23:47 crc kubenswrapper[4684]: I0121 10:23:47.837326 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "40476f9d-b18f-40ca-ac65-eac8a8ac1639" (UID: "40476f9d-b18f-40ca-ac65-eac8a8ac1639"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:23:47 crc kubenswrapper[4684]: I0121 10:23:47.840386 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "40476f9d-b18f-40ca-ac65-eac8a8ac1639" (UID: "40476f9d-b18f-40ca-ac65-eac8a8ac1639"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:23:47 crc kubenswrapper[4684]: I0121 10:23:47.844808 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "40476f9d-b18f-40ca-ac65-eac8a8ac1639" (UID: "40476f9d-b18f-40ca-ac65-eac8a8ac1639"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:23:47 crc kubenswrapper[4684]: I0121 10:23:47.846064 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "40476f9d-b18f-40ca-ac65-eac8a8ac1639" (UID: "40476f9d-b18f-40ca-ac65-eac8a8ac1639"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:23:47 crc kubenswrapper[4684]: I0121 10:23:47.854311 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "40476f9d-b18f-40ca-ac65-eac8a8ac1639" (UID: "40476f9d-b18f-40ca-ac65-eac8a8ac1639"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:23:47 crc kubenswrapper[4684]: I0121 10:23:47.860632 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "40476f9d-b18f-40ca-ac65-eac8a8ac1639" (UID: "40476f9d-b18f-40ca-ac65-eac8a8ac1639"). InnerVolumeSpecName "ceilometer-entrypoint-script". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:23:47 crc kubenswrapper[4684]: I0121 10:23:47.916973 4684 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Jan 21 10:23:47 crc kubenswrapper[4684]: I0121 10:23:47.917028 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vcgs\" (UniqueName: \"kubernetes.io/projected/40476f9d-b18f-40ca-ac65-eac8a8ac1639-kube-api-access-9vcgs\") on node \"crc\" DevicePath \"\"" Jan 21 10:23:47 crc kubenswrapper[4684]: I0121 10:23:47.917039 4684 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-healthcheck-log\") on node \"crc\" DevicePath \"\"" Jan 21 10:23:47 crc kubenswrapper[4684]: I0121 10:23:47.917048 4684 reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 21 10:23:47 crc kubenswrapper[4684]: I0121 10:23:47.917057 4684 reconciler_common.go:293] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-sensubility-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:23:47 crc kubenswrapper[4684]: I0121 10:23:47.917065 4684 reconciler_common.go:293] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 21 10:23:47 crc kubenswrapper[4684]: I0121 10:23:47.917074 4684 reconciler_common.go:293] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/40476f9d-b18f-40ca-ac65-eac8a8ac1639-collectd-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:23:48 crc kubenswrapper[4684]: I0121 10:23:48.378855 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-mhp4b" event={"ID":"40476f9d-b18f-40ca-ac65-eac8a8ac1639","Type":"ContainerDied","Data":"4c81f7ac17a03498d45895cfccf9a6ec414021aa679f24293eb30f15bfdbdfa8"} Jan 21 10:23:48 crc kubenswrapper[4684]: I0121 10:23:48.378933 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c81f7ac17a03498d45895cfccf9a6ec414021aa679f24293eb30f15bfdbdfa8" Jan 21 10:23:48 crc kubenswrapper[4684]: I0121 10:23:48.378992 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-mhp4b" Jan 21 10:23:49 crc kubenswrapper[4684]: I0121 10:23:49.657069 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-mhp4b_40476f9d-b18f-40ca-ac65-eac8a8ac1639/smoketest-collectd/0.log" Jan 21 10:23:49 crc kubenswrapper[4684]: I0121 10:23:49.901175 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-mhp4b_40476f9d-b18f-40ca-ac65-eac8a8ac1639/smoketest-ceilometer/0.log" Jan 21 10:23:50 crc kubenswrapper[4684]: I0121 10:23:50.139143 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-68864d46cb-h996c_199561ac-4446-4757-99b3-d9be6b135398/default-interconnect/0.log" Jan 21 10:23:50 crc kubenswrapper[4684]: I0121 10:23:50.406335 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7996dc9458-6859n_e2618a9c-cdc6-48e5-8b8e-eeb451329b9c/bridge/2.log" Jan 21 10:23:50 crc kubenswrapper[4684]: I0121 10:23:50.672683 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7996dc9458-6859n_e2618a9c-cdc6-48e5-8b8e-eeb451329b9c/sg-core/0.log" Jan 21 10:23:50 crc kubenswrapper[4684]: I0121 10:23:50.906376 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2_a606d589-c44b-4751-b3ff-54b86dd83209/bridge/2.log" Jan 21 10:23:51 crc kubenswrapper[4684]: I0121 10:23:51.130509 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2_a606d589-c44b-4751-b3ff-54b86dd83209/sg-core/0.log" Jan 21 10:23:51 crc kubenswrapper[4684]: I0121 10:23:51.390662 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4_a37b014b-6f8d-4179-be35-4896b00e0aec/bridge/2.log" Jan 21 10:23:51 crc kubenswrapper[4684]: I0121 10:23:51.620115 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4_a37b014b-6f8d-4179-be35-4896b00e0aec/sg-core/0.log" Jan 21 10:23:51 crc kubenswrapper[4684]: I0121 10:23:51.839135 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5_7af3e6f3-abfd-489c-9d2b-2c1469076565/bridge/2.log" Jan 21 10:23:52 crc kubenswrapper[4684]: I0121 10:23:52.059170 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5_7af3e6f3-abfd-489c-9d2b-2c1469076565/sg-core/0.log" Jan 21 10:23:52 crc kubenswrapper[4684]: I0121 10:23:52.309670 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx_6ea3f691-f7ad-4d69-97ef-3427114b483b/bridge/2.log" Jan 21 10:23:52 crc kubenswrapper[4684]: I0121 10:23:52.554146 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx_6ea3f691-f7ad-4d69-97ef-3427114b483b/sg-core/0.log" Jan 21 10:23:55 crc kubenswrapper[4684]: I0121 10:23:55.596121 4684 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_smart-gateway-operator-77c9d9f969-mbxq6_b8ec58ca-1a38-4f05-9394-90f7eac34be8/operator/0.log" Jan 21 10:23:55 crc kubenswrapper[4684]: I0121 10:23:55.866790 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_fc166550-60f0-4fee-a249-db0689c07f60/prometheus/0.log" Jan 21 10:23:56 crc kubenswrapper[4684]: I0121 10:23:56.133145 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_03846613-371f-48c5-b48d-268666ac73fe/elasticsearch/0.log" Jan 21 10:23:56 crc kubenswrapper[4684]: I0121 10:23:56.359515 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-78bcbbdcff-zbs45_e1a540ef-a29d-4944-b473-16c7efe8d573/prometheus-webhook-snmp/0.log" Jan 21 10:23:56 crc kubenswrapper[4684]: I0121 10:23:56.591471 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_6e76ba58-56c8-4465-bcee-35a5b361608b/alertmanager/0.log" Jan 21 10:24:08 crc kubenswrapper[4684]: I0121 10:24:08.805692 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-68688768b9-92w7t_3cc1ef89-ee68-427f-bd4e-77d27c77f8c4/operator/0.log" Jan 21 10:24:12 crc kubenswrapper[4684]: I0121 10:24:12.121826 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-77c9d9f969-mbxq6_b8ec58ca-1a38-4f05-9394-90f7eac34be8/operator/0.log" Jan 21 10:24:12 crc kubenswrapper[4684]: I0121 10:24:12.409576 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_fdca0836-0709-42a3-9fe5-78e3913422aa/qdr/0.log" Jan 21 10:24:36 crc kubenswrapper[4684]: I0121 10:24:36.626374 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vs698/must-gather-vbgb9"] Jan 21 10:24:36 crc kubenswrapper[4684]: E0121 10:24:36.627130 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40476f9d-b18f-40ca-ac65-eac8a8ac1639" containerName="smoketest-ceilometer" Jan 21 10:24:36 crc kubenswrapper[4684]: I0121 10:24:36.627142 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="40476f9d-b18f-40ca-ac65-eac8a8ac1639" containerName="smoketest-ceilometer" Jan 21 10:24:36 crc kubenswrapper[4684]: E0121 10:24:36.627152 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40476f9d-b18f-40ca-ac65-eac8a8ac1639" containerName="smoketest-collectd" Jan 21 10:24:36 crc kubenswrapper[4684]: I0121 10:24:36.627158 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="40476f9d-b18f-40ca-ac65-eac8a8ac1639" containerName="smoketest-collectd" Jan 21 10:24:36 crc kubenswrapper[4684]: E0121 10:24:36.627178 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="160a3579-c253-4eef-bc2f-1b03bdb3d21a" containerName="curl" Jan 21 10:24:36 crc kubenswrapper[4684]: I0121 10:24:36.627186 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="160a3579-c253-4eef-bc2f-1b03bdb3d21a" containerName="curl" Jan 21 10:24:36 crc kubenswrapper[4684]: I0121 10:24:36.627291 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="40476f9d-b18f-40ca-ac65-eac8a8ac1639" containerName="smoketest-ceilometer" Jan 21 10:24:36 crc kubenswrapper[4684]: I0121 10:24:36.627308 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="40476f9d-b18f-40ca-ac65-eac8a8ac1639" containerName="smoketest-collectd" Jan 21 10:24:36 crc kubenswrapper[4684]: I0121 
10:24:36.627318 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="160a3579-c253-4eef-bc2f-1b03bdb3d21a" containerName="curl" Jan 21 10:24:36 crc kubenswrapper[4684]: I0121 10:24:36.627990 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vs698/must-gather-vbgb9" Jan 21 10:24:36 crc kubenswrapper[4684]: I0121 10:24:36.629971 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vs698"/"kube-root-ca.crt" Jan 21 10:24:36 crc kubenswrapper[4684]: I0121 10:24:36.632353 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vs698"/"openshift-service-ca.crt" Jan 21 10:24:36 crc kubenswrapper[4684]: I0121 10:24:36.646503 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vs698/must-gather-vbgb9"] Jan 21 10:24:36 crc kubenswrapper[4684]: I0121 10:24:36.747553 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl4mr\" (UniqueName: \"kubernetes.io/projected/61cbabea-4b3e-46a0-819f-daf32a637f29-kube-api-access-wl4mr\") pod \"must-gather-vbgb9\" (UID: \"61cbabea-4b3e-46a0-819f-daf32a637f29\") " pod="openshift-must-gather-vs698/must-gather-vbgb9" Jan 21 10:24:36 crc kubenswrapper[4684]: I0121 10:24:36.747701 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/61cbabea-4b3e-46a0-819f-daf32a637f29-must-gather-output\") pod \"must-gather-vbgb9\" (UID: \"61cbabea-4b3e-46a0-819f-daf32a637f29\") " pod="openshift-must-gather-vs698/must-gather-vbgb9" Jan 21 10:24:36 crc kubenswrapper[4684]: I0121 10:24:36.848886 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/61cbabea-4b3e-46a0-819f-daf32a637f29-must-gather-output\") pod \"must-gather-vbgb9\" (UID: \"61cbabea-4b3e-46a0-819f-daf32a637f29\") " pod="openshift-must-gather-vs698/must-gather-vbgb9" Jan 21 10:24:36 crc kubenswrapper[4684]: I0121 10:24:36.849020 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl4mr\" (UniqueName: \"kubernetes.io/projected/61cbabea-4b3e-46a0-819f-daf32a637f29-kube-api-access-wl4mr\") pod \"must-gather-vbgb9\" (UID: \"61cbabea-4b3e-46a0-819f-daf32a637f29\") " pod="openshift-must-gather-vs698/must-gather-vbgb9" Jan 21 10:24:36 crc kubenswrapper[4684]: I0121 10:24:36.849315 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/61cbabea-4b3e-46a0-819f-daf32a637f29-must-gather-output\") pod \"must-gather-vbgb9\" (UID: \"61cbabea-4b3e-46a0-819f-daf32a637f29\") " pod="openshift-must-gather-vs698/must-gather-vbgb9" Jan 21 10:24:36 crc kubenswrapper[4684]: I0121 10:24:36.869878 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl4mr\" (UniqueName: \"kubernetes.io/projected/61cbabea-4b3e-46a0-819f-daf32a637f29-kube-api-access-wl4mr\") pod \"must-gather-vbgb9\" (UID: \"61cbabea-4b3e-46a0-819f-daf32a637f29\") " pod="openshift-must-gather-vs698/must-gather-vbgb9" Jan 21 10:24:36 crc kubenswrapper[4684]: I0121 10:24:36.946271 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vs698/must-gather-vbgb9" Jan 21 10:24:37 crc kubenswrapper[4684]: I0121 10:24:37.138333 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vs698/must-gather-vbgb9"] Jan 21 10:24:37 crc kubenswrapper[4684]: I0121 10:24:37.148228 4684 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 10:24:37 crc kubenswrapper[4684]: I0121 10:24:37.302476 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:24:37 crc kubenswrapper[4684]: I0121 10:24:37.302578 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:24:37 crc kubenswrapper[4684]: I0121 10:24:37.761402 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vs698/must-gather-vbgb9" event={"ID":"61cbabea-4b3e-46a0-819f-daf32a637f29","Type":"ContainerStarted","Data":"6720daec2d24ae11194c1a0116b2d1a31b8685142e226465cfb9891e17103047"} Jan 21 10:24:46 crc kubenswrapper[4684]: I0121 10:24:46.822638 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vs698/must-gather-vbgb9" event={"ID":"61cbabea-4b3e-46a0-819f-daf32a637f29","Type":"ContainerStarted","Data":"ff8ae3b20fa3efd3c18b0704697a910e09f9d9ae4bbbc447e5ab75e649de04f4"} Jan 21 10:24:46 crc kubenswrapper[4684]: I0121 10:24:46.823175 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vs698/must-gather-vbgb9" event={"ID":"61cbabea-4b3e-46a0-819f-daf32a637f29","Type":"ContainerStarted","Data":"ac429af0dd7d0475729b435dcdfa8abd98123031083b056738ae6250fa3e7ee1"} Jan 21 10:24:46 crc kubenswrapper[4684]: I0121 10:24:46.841226 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vs698/must-gather-vbgb9" podStartSLOduration=2.360596045 podStartE2EDuration="10.841210382s" podCreationTimestamp="2026-01-21 10:24:36 +0000 UTC" firstStartedPulling="2026-01-21 10:24:37.148200102 +0000 UTC m=+1114.906283069" lastFinishedPulling="2026-01-21 10:24:45.628814439 +0000 UTC m=+1123.386897406" observedRunningTime="2026-01-21 10:24:46.836908899 +0000 UTC m=+1124.594991866" watchObservedRunningTime="2026-01-21 10:24:46.841210382 +0000 UTC m=+1124.599293339" Jan 21 10:24:54 crc kubenswrapper[4684]: I0121 10:24:54.930352 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-96b8h"] Jan 21 10:24:54 crc kubenswrapper[4684]: I0121 10:24:54.931840 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-96b8h" Jan 21 10:24:54 crc kubenswrapper[4684]: I0121 10:24:54.954003 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-96b8h"] Jan 21 10:24:55 crc kubenswrapper[4684]: I0121 10:24:55.038459 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qs2b\" (UniqueName: \"kubernetes.io/projected/f0dfb990-f684-46ae-8f48-f58090e2f3cb-kube-api-access-8qs2b\") pod \"infrawatch-operators-96b8h\" (UID: \"f0dfb990-f684-46ae-8f48-f58090e2f3cb\") " pod="service-telemetry/infrawatch-operators-96b8h" Jan 21 10:24:55 crc kubenswrapper[4684]: I0121 10:24:55.140313 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qs2b\" (UniqueName: \"kubernetes.io/projected/f0dfb990-f684-46ae-8f48-f58090e2f3cb-kube-api-access-8qs2b\") pod \"infrawatch-operators-96b8h\" (UID: \"f0dfb990-f684-46ae-8f48-f58090e2f3cb\") " pod="service-telemetry/infrawatch-operators-96b8h" Jan 21 10:24:55 crc kubenswrapper[4684]: I0121 10:24:55.159856 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qs2b\" (UniqueName: \"kubernetes.io/projected/f0dfb990-f684-46ae-8f48-f58090e2f3cb-kube-api-access-8qs2b\") pod \"infrawatch-operators-96b8h\" (UID: \"f0dfb990-f684-46ae-8f48-f58090e2f3cb\") " pod="service-telemetry/infrawatch-operators-96b8h" Jan 21 10:24:55 crc kubenswrapper[4684]: I0121 10:24:55.252591 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-96b8h" Jan 21 10:24:55 crc kubenswrapper[4684]: I0121 10:24:55.510290 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-96b8h"] Jan 21 10:24:55 crc kubenswrapper[4684]: I0121 10:24:55.898955 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-96b8h" event={"ID":"f0dfb990-f684-46ae-8f48-f58090e2f3cb","Type":"ContainerStarted","Data":"3ad87c5553e8e1f2e7f8258db4614e384d82346bc7a5bca9fa94f8f395314228"} Jan 21 10:24:56 crc kubenswrapper[4684]: I0121 10:24:56.910699 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-96b8h" event={"ID":"f0dfb990-f684-46ae-8f48-f58090e2f3cb","Type":"ContainerStarted","Data":"b5ca302098f8efb5262bfdb7df3bc20299db7d291abb49e26962c21286e40d59"} Jan 21 10:24:56 crc kubenswrapper[4684]: I0121 10:24:56.937008 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-96b8h" podStartSLOduration=2.460092504 podStartE2EDuration="2.936979037s" podCreationTimestamp="2026-01-21 10:24:54 +0000 UTC" firstStartedPulling="2026-01-21 10:24:55.510715864 +0000 UTC m=+1133.268798831" lastFinishedPulling="2026-01-21 10:24:55.987602407 +0000 UTC m=+1133.745685364" observedRunningTime="2026-01-21 10:24:56.929874466 +0000 UTC m=+1134.687957463" watchObservedRunningTime="2026-01-21 10:24:56.936979037 +0000 UTC m=+1134.695062044" Jan 21 10:24:58 crc kubenswrapper[4684]: I0121 10:24:58.874014 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-64j7q_ecc3b392-1d71-4544-93ac-c93169931c41/control-plane-machine-set-operator/0.log" Jan 21 10:24:58 crc kubenswrapper[4684]: I0121 10:24:58.907929 4684 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-q9wnn_42c27350-86e3-4a02-9194-5dd24c297a12/kube-rbac-proxy/0.log" Jan 21 10:24:58 crc kubenswrapper[4684]: I0121 10:24:58.926917 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-q9wnn_42c27350-86e3-4a02-9194-5dd24c297a12/machine-api-operator/0.log" Jan 21 10:25:04 crc kubenswrapper[4684]: I0121 10:25:04.252171 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-5m7tk_b06f9741-edea-445c-91e9-74f4a0c414b8/cert-manager-controller/0.log" Jan 21 10:25:04 crc kubenswrapper[4684]: I0121 10:25:04.271334 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-q67bv_0ad30b20-d9a7-411a-bd08-af23f6cef22a/cert-manager-cainjector/0.log" Jan 21 10:25:04 crc kubenswrapper[4684]: I0121 10:25:04.279768 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-drtkg_4eca5441-46b0-4247-bf5a-b8981782867a/cert-manager-webhook/0.log" Jan 21 10:25:05 crc kubenswrapper[4684]: I0121 10:25:05.252838 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-96b8h" Jan 21 10:25:05 crc kubenswrapper[4684]: I0121 10:25:05.252943 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-96b8h" Jan 21 10:25:05 crc kubenswrapper[4684]: I0121 10:25:05.299779 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-96b8h" Jan 21 10:25:06 crc kubenswrapper[4684]: I0121 10:25:06.032589 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-96b8h" Jan 21 10:25:06 crc kubenswrapper[4684]: I0121 10:25:06.080174 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-96b8h"] Jan 21 10:25:07 crc kubenswrapper[4684]: I0121 10:25:07.302041 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:25:07 crc kubenswrapper[4684]: I0121 10:25:07.302458 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:25:08 crc kubenswrapper[4684]: I0121 10:25:08.017691 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-96b8h" podUID="f0dfb990-f684-46ae-8f48-f58090e2f3cb" containerName="registry-server" containerID="cri-o://b5ca302098f8efb5262bfdb7df3bc20299db7d291abb49e26962c21286e40d59" gracePeriod=2 Jan 21 10:25:08 crc kubenswrapper[4684]: I0121 10:25:08.400596 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-96b8h" Jan 21 10:25:08 crc kubenswrapper[4684]: I0121 10:25:08.548046 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qs2b\" (UniqueName: \"kubernetes.io/projected/f0dfb990-f684-46ae-8f48-f58090e2f3cb-kube-api-access-8qs2b\") pod \"f0dfb990-f684-46ae-8f48-f58090e2f3cb\" (UID: \"f0dfb990-f684-46ae-8f48-f58090e2f3cb\") " Jan 21 10:25:08 crc kubenswrapper[4684]: I0121 10:25:08.558571 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0dfb990-f684-46ae-8f48-f58090e2f3cb-kube-api-access-8qs2b" (OuterVolumeSpecName: "kube-api-access-8qs2b") pod "f0dfb990-f684-46ae-8f48-f58090e2f3cb" (UID: "f0dfb990-f684-46ae-8f48-f58090e2f3cb"). InnerVolumeSpecName "kube-api-access-8qs2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:25:08 crc kubenswrapper[4684]: I0121 10:25:08.649522 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qs2b\" (UniqueName: \"kubernetes.io/projected/f0dfb990-f684-46ae-8f48-f58090e2f3cb-kube-api-access-8qs2b\") on node \"crc\" DevicePath \"\"" Jan 21 10:25:09 crc kubenswrapper[4684]: I0121 10:25:09.027978 4684 generic.go:334] "Generic (PLEG): container finished" podID="f0dfb990-f684-46ae-8f48-f58090e2f3cb" containerID="b5ca302098f8efb5262bfdb7df3bc20299db7d291abb49e26962c21286e40d59" exitCode=0 Jan 21 10:25:09 crc kubenswrapper[4684]: I0121 10:25:09.028041 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-96b8h" event={"ID":"f0dfb990-f684-46ae-8f48-f58090e2f3cb","Type":"ContainerDied","Data":"b5ca302098f8efb5262bfdb7df3bc20299db7d291abb49e26962c21286e40d59"} Jan 21 10:25:09 crc kubenswrapper[4684]: I0121 10:25:09.028095 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-96b8h" event={"ID":"f0dfb990-f684-46ae-8f48-f58090e2f3cb","Type":"ContainerDied","Data":"3ad87c5553e8e1f2e7f8258db4614e384d82346bc7a5bca9fa94f8f395314228"} Jan 21 10:25:09 crc kubenswrapper[4684]: I0121 10:25:09.028120 4684 scope.go:117] "RemoveContainer" containerID="b5ca302098f8efb5262bfdb7df3bc20299db7d291abb49e26962c21286e40d59" Jan 21 10:25:09 crc kubenswrapper[4684]: I0121 10:25:09.028186 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-96b8h" Jan 21 10:25:09 crc kubenswrapper[4684]: I0121 10:25:09.054425 4684 scope.go:117] "RemoveContainer" containerID="b5ca302098f8efb5262bfdb7df3bc20299db7d291abb49e26962c21286e40d59" Jan 21 10:25:09 crc kubenswrapper[4684]: E0121 10:25:09.055692 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5ca302098f8efb5262bfdb7df3bc20299db7d291abb49e26962c21286e40d59\": container with ID starting with b5ca302098f8efb5262bfdb7df3bc20299db7d291abb49e26962c21286e40d59 not found: ID does not exist" containerID="b5ca302098f8efb5262bfdb7df3bc20299db7d291abb49e26962c21286e40d59" Jan 21 10:25:09 crc kubenswrapper[4684]: I0121 10:25:09.055728 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5ca302098f8efb5262bfdb7df3bc20299db7d291abb49e26962c21286e40d59"} err="failed to get container status \"b5ca302098f8efb5262bfdb7df3bc20299db7d291abb49e26962c21286e40d59\": rpc error: code = NotFound desc = could not find container \"b5ca302098f8efb5262bfdb7df3bc20299db7d291abb49e26962c21286e40d59\": container with ID starting with b5ca302098f8efb5262bfdb7df3bc20299db7d291abb49e26962c21286e40d59 not found: ID does not exist" Jan 21 10:25:09 crc kubenswrapper[4684]: I0121 10:25:09.071991 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-96b8h"] Jan 21 10:25:09 crc kubenswrapper[4684]: I0121 10:25:09.079079 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-96b8h"] Jan 21 10:25:10 crc kubenswrapper[4684]: I0121 10:25:10.214344 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-9gqzc_f2f1b01a-1863-4f20-892e-a4ac0f808d71/prometheus-operator/0.log" Jan 21 10:25:10 crc kubenswrapper[4684]: I0121 10:25:10.231646 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5cdf696bf7-8txfk_2669523c-e1d2-4b87-9e3e-f1e526c6ede5/prometheus-operator-admission-webhook/0.log" Jan 21 10:25:10 crc kubenswrapper[4684]: I0121 10:25:10.246630 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5cdf696bf7-s2ht7_394f40ff-afe2-471b-ae2d-6ba7ac2be402/prometheus-operator-admission-webhook/0.log" Jan 21 10:25:10 crc kubenswrapper[4684]: I0121 10:25:10.266778 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-njtvc_319c1164-c464-40c9-a163-89cbc719fa56/operator/0.log" Jan 21 10:25:10 crc kubenswrapper[4684]: I0121 10:25:10.279484 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-tgqvm_7a9202ce-c8d5-4fd6-a6c4-b670d0b14e26/perses-operator/0.log" Jan 21 10:25:10 crc kubenswrapper[4684]: I0121 10:25:10.525060 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0dfb990-f684-46ae-8f48-f58090e2f3cb" path="/var/lib/kubelet/pods/f0dfb990-f684-46ae-8f48-f58090e2f3cb/volumes" Jan 21 10:25:15 crc kubenswrapper[4684]: I0121 10:25:15.945273 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4_1e79586b-d501-4fa0-9c2b-d612682f0c43/extract/0.log" Jan 21 10:25:15 crc kubenswrapper[4684]: I0121 10:25:15.953831 4684 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4_1e79586b-d501-4fa0-9c2b-d612682f0c43/util/0.log" Jan 21 10:25:15 crc kubenswrapper[4684]: I0121 10:25:15.985432 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a944w4_1e79586b-d501-4fa0-9c2b-d612682f0c43/pull/0.log" Jan 21 10:25:15 crc kubenswrapper[4684]: I0121 10:25:15.995947 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsv7br_7907315a-0307-4509-97b4-160bf055fac8/extract/0.log" Jan 21 10:25:16 crc kubenswrapper[4684]: I0121 10:25:16.003780 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsv7br_7907315a-0307-4509-97b4-160bf055fac8/util/0.log" Jan 21 10:25:16 crc kubenswrapper[4684]: I0121 10:25:16.011908 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsv7br_7907315a-0307-4509-97b4-160bf055fac8/pull/0.log" Jan 21 10:25:16 crc kubenswrapper[4684]: I0121 10:25:16.023487 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etlmpw_51e6c957-888f-4a6f-a18a-d13b6d901c78/extract/0.log" Jan 21 10:25:16 crc kubenswrapper[4684]: I0121 10:25:16.028731 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etlmpw_51e6c957-888f-4a6f-a18a-d13b6d901c78/util/0.log" Jan 21 10:25:16 crc kubenswrapper[4684]: I0121 10:25:16.037350 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5etlmpw_51e6c957-888f-4a6f-a18a-d13b6d901c78/pull/0.log" Jan 21 10:25:16 crc kubenswrapper[4684]: I0121 10:25:16.050055 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq_061544f4-badf-4c7b-a325-e582fa4e2451/extract/0.log" Jan 21 10:25:16 crc kubenswrapper[4684]: I0121 10:25:16.060179 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq_061544f4-badf-4c7b-a325-e582fa4e2451/util/0.log" Jan 21 10:25:16 crc kubenswrapper[4684]: I0121 10:25:16.066672 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08q8hcq_061544f4-badf-4c7b-a325-e582fa4e2451/pull/0.log" Jan 21 10:25:16 crc kubenswrapper[4684]: I0121 10:25:16.320043 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-chnqm_7a860bc7-f569-4865-bba1-65aded1a7dac/registry-server/0.log" Jan 21 10:25:16 crc kubenswrapper[4684]: I0121 10:25:16.324906 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-chnqm_7a860bc7-f569-4865-bba1-65aded1a7dac/extract-utilities/0.log" Jan 21 10:25:16 crc kubenswrapper[4684]: I0121 10:25:16.335570 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-chnqm_7a860bc7-f569-4865-bba1-65aded1a7dac/extract-content/0.log" Jan 21 10:25:16 crc 
kubenswrapper[4684]: I0121 10:25:16.684791 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dwnx5_db0f8c13-b2c9-405d-8a57-4a68439bb296/registry-server/0.log" Jan 21 10:25:16 crc kubenswrapper[4684]: I0121 10:25:16.689568 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dwnx5_db0f8c13-b2c9-405d-8a57-4a68439bb296/extract-utilities/0.log" Jan 21 10:25:16 crc kubenswrapper[4684]: I0121 10:25:16.696143 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dwnx5_db0f8c13-b2c9-405d-8a57-4a68439bb296/extract-content/0.log" Jan 21 10:25:16 crc kubenswrapper[4684]: I0121 10:25:16.708868 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-fpqww_8bc5222c-6074-4cbe-b442-7b41ed0ed363/marketplace-operator/0.log" Jan 21 10:25:17 crc kubenswrapper[4684]: I0121 10:25:17.053698 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mk9s5_e3bac8f1-34d3-4f17-853f-a8ccd424baef/registry-server/0.log" Jan 21 10:25:17 crc kubenswrapper[4684]: I0121 10:25:17.059534 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mk9s5_e3bac8f1-34d3-4f17-853f-a8ccd424baef/extract-utilities/0.log" Jan 21 10:25:17 crc kubenswrapper[4684]: I0121 10:25:17.064918 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mk9s5_e3bac8f1-34d3-4f17-853f-a8ccd424baef/extract-content/0.log" Jan 21 10:25:20 crc kubenswrapper[4684]: I0121 10:25:20.632177 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-9gqzc_f2f1b01a-1863-4f20-892e-a4ac0f808d71/prometheus-operator/0.log" Jan 21 10:25:20 crc kubenswrapper[4684]: I0121 10:25:20.643705 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5cdf696bf7-8txfk_2669523c-e1d2-4b87-9e3e-f1e526c6ede5/prometheus-operator-admission-webhook/0.log" Jan 21 10:25:20 crc kubenswrapper[4684]: I0121 10:25:20.663022 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5cdf696bf7-s2ht7_394f40ff-afe2-471b-ae2d-6ba7ac2be402/prometheus-operator-admission-webhook/0.log" Jan 21 10:25:20 crc kubenswrapper[4684]: I0121 10:25:20.686678 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-njtvc_319c1164-c464-40c9-a163-89cbc719fa56/operator/0.log" Jan 21 10:25:20 crc kubenswrapper[4684]: I0121 10:25:20.706622 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-tgqvm_7a9202ce-c8d5-4fd6-a6c4-b670d0b14e26/perses-operator/0.log" Jan 21 10:25:28 crc kubenswrapper[4684]: I0121 10:25:28.872112 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-9gqzc_f2f1b01a-1863-4f20-892e-a4ac0f808d71/prometheus-operator/0.log" Jan 21 10:25:28 crc kubenswrapper[4684]: I0121 10:25:28.898530 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5cdf696bf7-8txfk_2669523c-e1d2-4b87-9e3e-f1e526c6ede5/prometheus-operator-admission-webhook/0.log" Jan 21 10:25:28 crc kubenswrapper[4684]: I0121 10:25:28.906848 4684 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5cdf696bf7-s2ht7_394f40ff-afe2-471b-ae2d-6ba7ac2be402/prometheus-operator-admission-webhook/0.log" Jan 21 10:25:28 crc kubenswrapper[4684]: I0121 10:25:28.924918 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-njtvc_319c1164-c464-40c9-a163-89cbc719fa56/operator/0.log" Jan 21 10:25:28 crc kubenswrapper[4684]: I0121 10:25:28.938303 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-tgqvm_7a9202ce-c8d5-4fd6-a6c4-b670d0b14e26/perses-operator/0.log" Jan 21 10:25:29 crc kubenswrapper[4684]: I0121 10:25:29.046683 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-5m7tk_b06f9741-edea-445c-91e9-74f4a0c414b8/cert-manager-controller/0.log" Jan 21 10:25:29 crc kubenswrapper[4684]: I0121 10:25:29.058789 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-q67bv_0ad30b20-d9a7-411a-bd08-af23f6cef22a/cert-manager-cainjector/0.log" Jan 21 10:25:29 crc kubenswrapper[4684]: I0121 10:25:29.066945 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-drtkg_4eca5441-46b0-4247-bf5a-b8981782867a/cert-manager-webhook/0.log" Jan 21 10:25:29 crc kubenswrapper[4684]: I0121 10:25:29.499565 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-5m7tk_b06f9741-edea-445c-91e9-74f4a0c414b8/cert-manager-controller/0.log" Jan 21 10:25:29 crc kubenswrapper[4684]: I0121 10:25:29.509236 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-q67bv_0ad30b20-d9a7-411a-bd08-af23f6cef22a/cert-manager-cainjector/0.log" Jan 21 10:25:29 crc kubenswrapper[4684]: I0121 10:25:29.523057 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-drtkg_4eca5441-46b0-4247-bf5a-b8981782867a/cert-manager-webhook/0.log" Jan 21 10:25:29 crc kubenswrapper[4684]: I0121 10:25:29.912949 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-64j7q_ecc3b392-1d71-4544-93ac-c93169931c41/control-plane-machine-set-operator/0.log" Jan 21 10:25:29 crc kubenswrapper[4684]: I0121 10:25:29.923499 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-q9wnn_42c27350-86e3-4a02-9194-5dd24c297a12/kube-rbac-proxy/0.log" Jan 21 10:25:29 crc kubenswrapper[4684]: I0121 10:25:29.929208 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-q9wnn_42c27350-86e3-4a02-9194-5dd24c297a12/machine-api-operator/0.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.484501 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_2ed151d2bd3e125f2453293dd520598a3bcb66264bbe0a9abec30f3a5evzwcp_2c66d4da-7bde-43d6-af8d-957368c8ce4f/extract/0.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.494956 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_2ed151d2bd3e125f2453293dd520598a3bcb66264bbe0a9abec30f3a5evzwcp_2c66d4da-7bde-43d6-af8d-957368c8ce4f/util/0.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.506276 4684 log.go:25] "Finished parsing 
log file" path="/var/log/pods/service-telemetry_2ed151d2bd3e125f2453293dd520598a3bcb66264bbe0a9abec30f3a5evzwcp_2c66d4da-7bde-43d6-af8d-957368c8ce4f/pull/0.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.525721 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_6e76ba58-56c8-4465-bcee-35a5b361608b/alertmanager/0.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.537277 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_6e76ba58-56c8-4465-bcee-35a5b361608b/config-reloader/0.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.546975 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_6e76ba58-56c8-4465-bcee-35a5b361608b/oauth-proxy/0.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.557261 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_6e76ba58-56c8-4465-bcee-35a5b361608b/init-config-reloader/0.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.572716 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_160a3579-c253-4eef-bc2f-1b03bdb3d21a/curl/0.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.583569 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5_7af3e6f3-abfd-489c-9d2b-2c1469076565/bridge/2.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.583751 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5_7af3e6f3-abfd-489c-9d2b-2c1469076565/bridge/1.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.591531 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7fb5cbbd5d-2h8p5_7af3e6f3-abfd-489c-9d2b-2c1469076565/sg-core/0.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.612853 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4_a37b014b-6f8d-4179-be35-4896b00e0aec/oauth-proxy/0.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.622633 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4_a37b014b-6f8d-4179-be35-4896b00e0aec/bridge/1.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.623087 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4_a37b014b-6f8d-4179-be35-4896b00e0aec/bridge/2.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.628809 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-b57f974ff-2jrq4_a37b014b-6f8d-4179-be35-4896b00e0aec/sg-core/0.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.643169 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2_a606d589-c44b-4751-b3ff-54b86dd83209/bridge/2.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.643395 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2_a606d589-c44b-4751-b3ff-54b86dd83209/bridge/1.log" Jan 21 
10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.648606 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-5bb5d855bf-vr8w2_a606d589-c44b-4751-b3ff-54b86dd83209/sg-core/0.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.663541 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7996dc9458-6859n_e2618a9c-cdc6-48e5-8b8e-eeb451329b9c/oauth-proxy/0.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.674434 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7996dc9458-6859n_e2618a9c-cdc6-48e5-8b8e-eeb451329b9c/bridge/2.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.676225 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7996dc9458-6859n_e2618a9c-cdc6-48e5-8b8e-eeb451329b9c/bridge/1.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.682665 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7996dc9458-6859n_e2618a9c-cdc6-48e5-8b8e-eeb451329b9c/sg-core/0.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.693574 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx_6ea3f691-f7ad-4d69-97ef-3427114b483b/oauth-proxy/0.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.700153 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx_6ea3f691-f7ad-4d69-97ef-3427114b483b/bridge/1.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.701154 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx_6ea3f691-f7ad-4d69-97ef-3427114b483b/bridge/2.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.705477 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-6864f4fb65-8twkx_6ea3f691-f7ad-4d69-97ef-3427114b483b/sg-core/0.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.723348 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-68864d46cb-h996c_199561ac-4446-4757-99b3-d9be6b135398/default-interconnect/0.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.732296 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-78bcbbdcff-zbs45_e1a540ef-a29d-4944-b473-16c7efe8d573/prometheus-webhook-snmp/0.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.745700 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_e2ca482ba9d5013d52294c83a5aba8d96bfcab343aa05dec9349fd22154nlsm_9c60c1de-970b-47e8-8a22-802ae60cd8ba/extract/0.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.755557 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_e2ca482ba9d5013d52294c83a5aba8d96bfcab343aa05dec9349fd22154nlsm_9c60c1de-970b-47e8-8a22-802ae60cd8ba/util/0.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.763915 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_e2ca482ba9d5013d52294c83a5aba8d96bfcab343aa05dec9349fd22154nlsm_9c60c1de-970b-47e8-8a22-802ae60cd8ba/pull/0.log" Jan 
21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.803622 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elastic-operator-bb4ffb7f7-gzmw4_9a3ae8fc-094f-4ee5-8c57-c1ddbccfc06d/manager/0.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.830871 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_03846613-371f-48c5-b48d-268666ac73fe/elasticsearch/0.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.842394 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_03846613-371f-48c5-b48d-268666ac73fe/elastic-internal-init-filesystem/0.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.847585 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_03846613-371f-48c5-b48d-268666ac73fe/elastic-internal-suspend/0.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.860000 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_infrawatch-operators-28z9j_5dd27e48-6f78-4869-9700-39a70a006a4b/registry-server/0.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.872718 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_interconnect-operator-5bb49f789d-bpfb4_3a7811c1-b3f1-436d-b98b-28a957a4a7bf/interconnect-operator/0.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.890482 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_fc166550-60f0-4fee-a249-db0689c07f60/prometheus/0.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.898326 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_fc166550-60f0-4fee-a249-db0689c07f60/config-reloader/0.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.905074 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_fc166550-60f0-4fee-a249-db0689c07f60/oauth-proxy/0.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.912964 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_fc166550-60f0-4fee-a249-db0689c07f60/init-config-reloader/0.log" Jan 21 10:25:30 crc kubenswrapper[4684]: I0121 10:25:30.946296 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_fdca0836-0709-42a3-9fe5-78e3913422aa/qdr/0.log" Jan 21 10:25:31 crc kubenswrapper[4684]: I0121 10:25:31.142969 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-68688768b9-92w7t_3cc1ef89-ee68-427f-bd4e-77d27c77f8c4/operator/0.log" Jan 21 10:25:34 crc kubenswrapper[4684]: I0121 10:25:34.391475 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-77c9d9f969-mbxq6_b8ec58ca-1a38-4f05-9394-90f7eac34be8/operator/0.log" Jan 21 10:25:34 crc kubenswrapper[4684]: I0121 10:25:34.431850 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-mhp4b_40476f9d-b18f-40ca-ac65-eac8a8ac1639/smoketest-collectd/0.log" Jan 21 10:25:34 crc kubenswrapper[4684]: I0121 10:25:34.437886 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-mhp4b_40476f9d-b18f-40ca-ac65-eac8a8ac1639/smoketest-ceilometer/0.log" Jan 21 10:25:35 crc kubenswrapper[4684]: I0121 10:25:35.638870 4684 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6jwd4_1e7ac4c6-b960-418c-b057-e55d95a213cd/kube-multus/3.log" Jan 21 10:25:35 crc kubenswrapper[4684]: I0121 10:25:35.658948 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6jwd4_1e7ac4c6-b960-418c-b057-e55d95a213cd/kube-multus/2.log" Jan 21 10:25:35 crc kubenswrapper[4684]: I0121 10:25:35.676862 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-gshl8_c2704b9e-474a-466a-b78c-d136a2f95a3b/kube-multus-additional-cni-plugins/0.log" Jan 21 10:25:35 crc kubenswrapper[4684]: I0121 10:25:35.684293 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-gshl8_c2704b9e-474a-466a-b78c-d136a2f95a3b/egress-router-binary-copy/0.log" Jan 21 10:25:35 crc kubenswrapper[4684]: I0121 10:25:35.691015 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-gshl8_c2704b9e-474a-466a-b78c-d136a2f95a3b/cni-plugins/0.log" Jan 21 10:25:35 crc kubenswrapper[4684]: I0121 10:25:35.697338 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-gshl8_c2704b9e-474a-466a-b78c-d136a2f95a3b/bond-cni-plugin/0.log" Jan 21 10:25:35 crc kubenswrapper[4684]: I0121 10:25:35.704792 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-gshl8_c2704b9e-474a-466a-b78c-d136a2f95a3b/routeoverride-cni/0.log" Jan 21 10:25:35 crc kubenswrapper[4684]: I0121 10:25:35.711826 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-gshl8_c2704b9e-474a-466a-b78c-d136a2f95a3b/whereabouts-cni-bincopy/0.log" Jan 21 10:25:35 crc kubenswrapper[4684]: I0121 10:25:35.718383 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-gshl8_c2704b9e-474a-466a-b78c-d136a2f95a3b/whereabouts-cni/0.log" Jan 21 10:25:35 crc kubenswrapper[4684]: I0121 10:25:35.728860 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-dg7k7_0d9a24c5-ed0d-473e-a562-ee476c663ed5/multus-admission-controller/0.log" Jan 21 10:25:35 crc kubenswrapper[4684]: I0121 10:25:35.737042 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-dg7k7_0d9a24c5-ed0d-473e-a562-ee476c663ed5/kube-rbac-proxy/0.log" Jan 21 10:25:35 crc kubenswrapper[4684]: I0121 10:25:35.761154 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-7wzh7_49971ee3-e56a-4d50-8fc5-231bdcfc92d5/network-metrics-daemon/0.log" Jan 21 10:25:35 crc kubenswrapper[4684]: I0121 10:25:35.765891 4684 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-7wzh7_49971ee3-e56a-4d50-8fc5-231bdcfc92d5/kube-rbac-proxy/0.log" Jan 21 10:25:37 crc kubenswrapper[4684]: I0121 10:25:37.302542 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:25:37 crc kubenswrapper[4684]: I0121 10:25:37.302912 4684 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:25:37 crc kubenswrapper[4684]: I0121 10:25:37.302963 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" Jan 21 10:25:37 crc kubenswrapper[4684]: I0121 10:25:37.303623 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"47abcdb0c669edba9be769bd0f81dd5ed365b01f4428fe3808be955facfe8deb"} pod="openshift-machine-config-operator/machine-config-daemon-sff6s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 10:25:37 crc kubenswrapper[4684]: I0121 10:25:37.303689 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" containerID="cri-o://47abcdb0c669edba9be769bd0f81dd5ed365b01f4428fe3808be955facfe8deb" gracePeriod=600 Jan 21 10:25:38 crc kubenswrapper[4684]: I0121 10:25:38.220235 4684 generic.go:334] "Generic (PLEG): container finished" podID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerID="47abcdb0c669edba9be769bd0f81dd5ed365b01f4428fe3808be955facfe8deb" exitCode=0 Jan 21 10:25:38 crc kubenswrapper[4684]: I0121 10:25:38.220319 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" event={"ID":"55d2c484-cf10-46b5-913f-2a033a2ff5c1","Type":"ContainerDied","Data":"47abcdb0c669edba9be769bd0f81dd5ed365b01f4428fe3808be955facfe8deb"} Jan 21 10:25:38 crc kubenswrapper[4684]: I0121 10:25:38.220591 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" event={"ID":"55d2c484-cf10-46b5-913f-2a033a2ff5c1","Type":"ContainerStarted","Data":"0d0fb07716548375205a1c7832495f9b372e76b06daf4639ad84e8fdc9a5fec0"} Jan 21 10:25:38 crc kubenswrapper[4684]: I0121 10:25:38.220613 4684 scope.go:117] "RemoveContainer" containerID="6f4462918383bc467ad7f03eeec652df2cace8658f28e603ce544d6d7137b741" Jan 21 10:27:37 crc kubenswrapper[4684]: I0121 10:27:37.302500 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:27:37 crc kubenswrapper[4684]: I0121 10:27:37.303283 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:28:07 crc kubenswrapper[4684]: I0121 10:28:07.302352 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:28:07 crc 
kubenswrapper[4684]: I0121 10:28:07.304104 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:28:37 crc kubenswrapper[4684]: I0121 10:28:37.302586 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:28:37 crc kubenswrapper[4684]: I0121 10:28:37.303097 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:28:37 crc kubenswrapper[4684]: I0121 10:28:37.303141 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" Jan 21 10:28:37 crc kubenswrapper[4684]: I0121 10:28:37.303773 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0d0fb07716548375205a1c7832495f9b372e76b06daf4639ad84e8fdc9a5fec0"} pod="openshift-machine-config-operator/machine-config-daemon-sff6s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 10:28:37 crc kubenswrapper[4684]: I0121 10:28:37.303831 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" containerID="cri-o://0d0fb07716548375205a1c7832495f9b372e76b06daf4639ad84e8fdc9a5fec0" gracePeriod=600 Jan 21 10:28:37 crc kubenswrapper[4684]: I0121 10:28:37.589584 4684 generic.go:334] "Generic (PLEG): container finished" podID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerID="0d0fb07716548375205a1c7832495f9b372e76b06daf4639ad84e8fdc9a5fec0" exitCode=0 Jan 21 10:28:37 crc kubenswrapper[4684]: I0121 10:28:37.589803 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" event={"ID":"55d2c484-cf10-46b5-913f-2a033a2ff5c1","Type":"ContainerDied","Data":"0d0fb07716548375205a1c7832495f9b372e76b06daf4639ad84e8fdc9a5fec0"} Jan 21 10:28:37 crc kubenswrapper[4684]: I0121 10:28:37.590195 4684 scope.go:117] "RemoveContainer" containerID="47abcdb0c669edba9be769bd0f81dd5ed365b01f4428fe3808be955facfe8deb" Jan 21 10:28:38 crc kubenswrapper[4684]: I0121 10:28:38.598394 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" event={"ID":"55d2c484-cf10-46b5-913f-2a033a2ff5c1","Type":"ContainerStarted","Data":"df0b97e2271859ee95d0e6fe0e56bb03c909446093ed16d630b8869849584b56"} Jan 21 10:28:40 crc kubenswrapper[4684]: I0121 10:28:40.894117 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-drchl"] Jan 21 10:28:40 crc kubenswrapper[4684]: E0121 10:28:40.895036 4684 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f0dfb990-f684-46ae-8f48-f58090e2f3cb" containerName="registry-server" Jan 21 10:28:40 crc kubenswrapper[4684]: I0121 10:28:40.895066 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0dfb990-f684-46ae-8f48-f58090e2f3cb" containerName="registry-server" Jan 21 10:28:40 crc kubenswrapper[4684]: I0121 10:28:40.895330 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0dfb990-f684-46ae-8f48-f58090e2f3cb" containerName="registry-server" Jan 21 10:28:40 crc kubenswrapper[4684]: I0121 10:28:40.897415 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-drchl" Jan 21 10:28:40 crc kubenswrapper[4684]: I0121 10:28:40.916580 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-drchl"] Jan 21 10:28:40 crc kubenswrapper[4684]: I0121 10:28:40.944757 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghgn2\" (UniqueName: \"kubernetes.io/projected/cabdab1a-a30a-4748-81bd-efb016854c20-kube-api-access-ghgn2\") pod \"redhat-operators-drchl\" (UID: \"cabdab1a-a30a-4748-81bd-efb016854c20\") " pod="openshift-marketplace/redhat-operators-drchl" Jan 21 10:28:40 crc kubenswrapper[4684]: I0121 10:28:40.944877 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cabdab1a-a30a-4748-81bd-efb016854c20-utilities\") pod \"redhat-operators-drchl\" (UID: \"cabdab1a-a30a-4748-81bd-efb016854c20\") " pod="openshift-marketplace/redhat-operators-drchl" Jan 21 10:28:40 crc kubenswrapper[4684]: I0121 10:28:40.944921 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cabdab1a-a30a-4748-81bd-efb016854c20-catalog-content\") pod \"redhat-operators-drchl\" (UID: \"cabdab1a-a30a-4748-81bd-efb016854c20\") " pod="openshift-marketplace/redhat-operators-drchl" Jan 21 10:28:41 crc kubenswrapper[4684]: I0121 10:28:41.046280 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cabdab1a-a30a-4748-81bd-efb016854c20-catalog-content\") pod \"redhat-operators-drchl\" (UID: \"cabdab1a-a30a-4748-81bd-efb016854c20\") " pod="openshift-marketplace/redhat-operators-drchl" Jan 21 10:28:41 crc kubenswrapper[4684]: I0121 10:28:41.046447 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghgn2\" (UniqueName: \"kubernetes.io/projected/cabdab1a-a30a-4748-81bd-efb016854c20-kube-api-access-ghgn2\") pod \"redhat-operators-drchl\" (UID: \"cabdab1a-a30a-4748-81bd-efb016854c20\") " pod="openshift-marketplace/redhat-operators-drchl" Jan 21 10:28:41 crc kubenswrapper[4684]: I0121 10:28:41.046542 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cabdab1a-a30a-4748-81bd-efb016854c20-utilities\") pod \"redhat-operators-drchl\" (UID: \"cabdab1a-a30a-4748-81bd-efb016854c20\") " pod="openshift-marketplace/redhat-operators-drchl" Jan 21 10:28:41 crc kubenswrapper[4684]: I0121 10:28:41.047470 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cabdab1a-a30a-4748-81bd-efb016854c20-catalog-content\") pod \"redhat-operators-drchl\" (UID: 
\"cabdab1a-a30a-4748-81bd-efb016854c20\") " pod="openshift-marketplace/redhat-operators-drchl" Jan 21 10:28:41 crc kubenswrapper[4684]: I0121 10:28:41.047509 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cabdab1a-a30a-4748-81bd-efb016854c20-utilities\") pod \"redhat-operators-drchl\" (UID: \"cabdab1a-a30a-4748-81bd-efb016854c20\") " pod="openshift-marketplace/redhat-operators-drchl" Jan 21 10:28:41 crc kubenswrapper[4684]: I0121 10:28:41.069346 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghgn2\" (UniqueName: \"kubernetes.io/projected/cabdab1a-a30a-4748-81bd-efb016854c20-kube-api-access-ghgn2\") pod \"redhat-operators-drchl\" (UID: \"cabdab1a-a30a-4748-81bd-efb016854c20\") " pod="openshift-marketplace/redhat-operators-drchl" Jan 21 10:28:41 crc kubenswrapper[4684]: I0121 10:28:41.219992 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-drchl" Jan 21 10:28:41 crc kubenswrapper[4684]: I0121 10:28:41.467196 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-drchl"] Jan 21 10:28:41 crc kubenswrapper[4684]: I0121 10:28:41.635548 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drchl" event={"ID":"cabdab1a-a30a-4748-81bd-efb016854c20","Type":"ContainerStarted","Data":"e4e7f4988913778cf17f885f4d6424840bd5db3fc6908a8f289b69a8afc477f0"} Jan 21 10:28:41 crc kubenswrapper[4684]: I0121 10:28:41.635813 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drchl" event={"ID":"cabdab1a-a30a-4748-81bd-efb016854c20","Type":"ContainerStarted","Data":"af55e1e7ecbc3e09fae0586130edbb442895bbe141acc7c8e79ac3de6193c876"} Jan 21 10:28:42 crc kubenswrapper[4684]: I0121 10:28:42.659613 4684 generic.go:334] "Generic (PLEG): container finished" podID="cabdab1a-a30a-4748-81bd-efb016854c20" containerID="e4e7f4988913778cf17f885f4d6424840bd5db3fc6908a8f289b69a8afc477f0" exitCode=0 Jan 21 10:28:42 crc kubenswrapper[4684]: I0121 10:28:42.659978 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drchl" event={"ID":"cabdab1a-a30a-4748-81bd-efb016854c20","Type":"ContainerDied","Data":"e4e7f4988913778cf17f885f4d6424840bd5db3fc6908a8f289b69a8afc477f0"} Jan 21 10:28:43 crc kubenswrapper[4684]: I0121 10:28:43.682606 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drchl" event={"ID":"cabdab1a-a30a-4748-81bd-efb016854c20","Type":"ContainerStarted","Data":"9e802b533c4fd5cde701c5e0e4e340c13c65bf3b632996a11314a7d443e432aa"} Jan 21 10:28:44 crc kubenswrapper[4684]: I0121 10:28:44.702057 4684 generic.go:334] "Generic (PLEG): container finished" podID="cabdab1a-a30a-4748-81bd-efb016854c20" containerID="9e802b533c4fd5cde701c5e0e4e340c13c65bf3b632996a11314a7d443e432aa" exitCode=0 Jan 21 10:28:44 crc kubenswrapper[4684]: I0121 10:28:44.702114 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drchl" event={"ID":"cabdab1a-a30a-4748-81bd-efb016854c20","Type":"ContainerDied","Data":"9e802b533c4fd5cde701c5e0e4e340c13c65bf3b632996a11314a7d443e432aa"} Jan 21 10:28:47 crc kubenswrapper[4684]: I0121 10:28:47.729067 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drchl" 
event={"ID":"cabdab1a-a30a-4748-81bd-efb016854c20","Type":"ContainerStarted","Data":"0f7f057b124bcfc023da65f339d38eff6cb03c3fed1e4393282be38f9ebcd4e4"} Jan 21 10:28:47 crc kubenswrapper[4684]: I0121 10:28:47.755680 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-drchl" podStartSLOduration=2.795208875 podStartE2EDuration="7.755662872s" podCreationTimestamp="2026-01-21 10:28:40 +0000 UTC" firstStartedPulling="2026-01-21 10:28:41.637256311 +0000 UTC m=+1359.395339278" lastFinishedPulling="2026-01-21 10:28:46.597710298 +0000 UTC m=+1364.355793275" observedRunningTime="2026-01-21 10:28:47.751695658 +0000 UTC m=+1365.509778635" watchObservedRunningTime="2026-01-21 10:28:47.755662872 +0000 UTC m=+1365.513745839" Jan 21 10:28:51 crc kubenswrapper[4684]: I0121 10:28:51.220241 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-drchl" Jan 21 10:28:51 crc kubenswrapper[4684]: I0121 10:28:51.220636 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-drchl" Jan 21 10:28:52 crc kubenswrapper[4684]: I0121 10:28:52.268454 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-drchl" podUID="cabdab1a-a30a-4748-81bd-efb016854c20" containerName="registry-server" probeResult="failure" output=< Jan 21 10:28:52 crc kubenswrapper[4684]: timeout: failed to connect service ":50051" within 1s Jan 21 10:28:52 crc kubenswrapper[4684]: > Jan 21 10:29:01 crc kubenswrapper[4684]: I0121 10:29:01.269673 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-drchl" Jan 21 10:29:01 crc kubenswrapper[4684]: I0121 10:29:01.312432 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-drchl" Jan 21 10:29:01 crc kubenswrapper[4684]: I0121 10:29:01.509973 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-drchl"] Jan 21 10:29:02 crc kubenswrapper[4684]: I0121 10:29:02.856511 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-drchl" podUID="cabdab1a-a30a-4748-81bd-efb016854c20" containerName="registry-server" containerID="cri-o://0f7f057b124bcfc023da65f339d38eff6cb03c3fed1e4393282be38f9ebcd4e4" gracePeriod=2 Jan 21 10:29:03 crc kubenswrapper[4684]: I0121 10:29:03.264821 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-drchl" Jan 21 10:29:03 crc kubenswrapper[4684]: I0121 10:29:03.336996 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghgn2\" (UniqueName: \"kubernetes.io/projected/cabdab1a-a30a-4748-81bd-efb016854c20-kube-api-access-ghgn2\") pod \"cabdab1a-a30a-4748-81bd-efb016854c20\" (UID: \"cabdab1a-a30a-4748-81bd-efb016854c20\") " Jan 21 10:29:03 crc kubenswrapper[4684]: I0121 10:29:03.337076 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cabdab1a-a30a-4748-81bd-efb016854c20-utilities\") pod \"cabdab1a-a30a-4748-81bd-efb016854c20\" (UID: \"cabdab1a-a30a-4748-81bd-efb016854c20\") " Jan 21 10:29:03 crc kubenswrapper[4684]: I0121 10:29:03.337153 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cabdab1a-a30a-4748-81bd-efb016854c20-catalog-content\") pod \"cabdab1a-a30a-4748-81bd-efb016854c20\" (UID: \"cabdab1a-a30a-4748-81bd-efb016854c20\") " Jan 21 10:29:03 crc kubenswrapper[4684]: I0121 10:29:03.339799 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cabdab1a-a30a-4748-81bd-efb016854c20-utilities" (OuterVolumeSpecName: "utilities") pod "cabdab1a-a30a-4748-81bd-efb016854c20" (UID: "cabdab1a-a30a-4748-81bd-efb016854c20"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:29:03 crc kubenswrapper[4684]: I0121 10:29:03.366727 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cabdab1a-a30a-4748-81bd-efb016854c20-kube-api-access-ghgn2" (OuterVolumeSpecName: "kube-api-access-ghgn2") pod "cabdab1a-a30a-4748-81bd-efb016854c20" (UID: "cabdab1a-a30a-4748-81bd-efb016854c20"). InnerVolumeSpecName "kube-api-access-ghgn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:29:03 crc kubenswrapper[4684]: I0121 10:29:03.439065 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghgn2\" (UniqueName: \"kubernetes.io/projected/cabdab1a-a30a-4748-81bd-efb016854c20-kube-api-access-ghgn2\") on node \"crc\" DevicePath \"\"" Jan 21 10:29:03 crc kubenswrapper[4684]: I0121 10:29:03.439174 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cabdab1a-a30a-4748-81bd-efb016854c20-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:29:03 crc kubenswrapper[4684]: I0121 10:29:03.474043 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cabdab1a-a30a-4748-81bd-efb016854c20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cabdab1a-a30a-4748-81bd-efb016854c20" (UID: "cabdab1a-a30a-4748-81bd-efb016854c20"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:29:03 crc kubenswrapper[4684]: I0121 10:29:03.540799 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cabdab1a-a30a-4748-81bd-efb016854c20-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:29:03 crc kubenswrapper[4684]: I0121 10:29:03.865982 4684 generic.go:334] "Generic (PLEG): container finished" podID="cabdab1a-a30a-4748-81bd-efb016854c20" containerID="0f7f057b124bcfc023da65f339d38eff6cb03c3fed1e4393282be38f9ebcd4e4" exitCode=0 Jan 21 10:29:03 crc kubenswrapper[4684]: I0121 10:29:03.866036 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drchl" event={"ID":"cabdab1a-a30a-4748-81bd-efb016854c20","Type":"ContainerDied","Data":"0f7f057b124bcfc023da65f339d38eff6cb03c3fed1e4393282be38f9ebcd4e4"} Jan 21 10:29:03 crc kubenswrapper[4684]: I0121 10:29:03.866068 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drchl" event={"ID":"cabdab1a-a30a-4748-81bd-efb016854c20","Type":"ContainerDied","Data":"af55e1e7ecbc3e09fae0586130edbb442895bbe141acc7c8e79ac3de6193c876"} Jan 21 10:29:03 crc kubenswrapper[4684]: I0121 10:29:03.866090 4684 scope.go:117] "RemoveContainer" containerID="0f7f057b124bcfc023da65f339d38eff6cb03c3fed1e4393282be38f9ebcd4e4" Jan 21 10:29:03 crc kubenswrapper[4684]: I0121 10:29:03.866228 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-drchl" Jan 21 10:29:03 crc kubenswrapper[4684]: I0121 10:29:03.900532 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-drchl"] Jan 21 10:29:03 crc kubenswrapper[4684]: I0121 10:29:03.902561 4684 scope.go:117] "RemoveContainer" containerID="9e802b533c4fd5cde701c5e0e4e340c13c65bf3b632996a11314a7d443e432aa" Jan 21 10:29:03 crc kubenswrapper[4684]: I0121 10:29:03.907465 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-drchl"] Jan 21 10:29:03 crc kubenswrapper[4684]: I0121 10:29:03.935170 4684 scope.go:117] "RemoveContainer" containerID="e4e7f4988913778cf17f885f4d6424840bd5db3fc6908a8f289b69a8afc477f0" Jan 21 10:29:03 crc kubenswrapper[4684]: I0121 10:29:03.951526 4684 scope.go:117] "RemoveContainer" containerID="0f7f057b124bcfc023da65f339d38eff6cb03c3fed1e4393282be38f9ebcd4e4" Jan 21 10:29:03 crc kubenswrapper[4684]: E0121 10:29:03.952190 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f7f057b124bcfc023da65f339d38eff6cb03c3fed1e4393282be38f9ebcd4e4\": container with ID starting with 0f7f057b124bcfc023da65f339d38eff6cb03c3fed1e4393282be38f9ebcd4e4 not found: ID does not exist" containerID="0f7f057b124bcfc023da65f339d38eff6cb03c3fed1e4393282be38f9ebcd4e4" Jan 21 10:29:03 crc kubenswrapper[4684]: I0121 10:29:03.952245 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f7f057b124bcfc023da65f339d38eff6cb03c3fed1e4393282be38f9ebcd4e4"} err="failed to get container status \"0f7f057b124bcfc023da65f339d38eff6cb03c3fed1e4393282be38f9ebcd4e4\": rpc error: code = NotFound desc = could not find container \"0f7f057b124bcfc023da65f339d38eff6cb03c3fed1e4393282be38f9ebcd4e4\": container with ID starting with 0f7f057b124bcfc023da65f339d38eff6cb03c3fed1e4393282be38f9ebcd4e4 not found: ID does not exist" Jan 21 10:29:03 crc 
kubenswrapper[4684]: I0121 10:29:03.952278 4684 scope.go:117] "RemoveContainer" containerID="9e802b533c4fd5cde701c5e0e4e340c13c65bf3b632996a11314a7d443e432aa" Jan 21 10:29:03 crc kubenswrapper[4684]: E0121 10:29:03.952668 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e802b533c4fd5cde701c5e0e4e340c13c65bf3b632996a11314a7d443e432aa\": container with ID starting with 9e802b533c4fd5cde701c5e0e4e340c13c65bf3b632996a11314a7d443e432aa not found: ID does not exist" containerID="9e802b533c4fd5cde701c5e0e4e340c13c65bf3b632996a11314a7d443e432aa" Jan 21 10:29:03 crc kubenswrapper[4684]: I0121 10:29:03.952706 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e802b533c4fd5cde701c5e0e4e340c13c65bf3b632996a11314a7d443e432aa"} err="failed to get container status \"9e802b533c4fd5cde701c5e0e4e340c13c65bf3b632996a11314a7d443e432aa\": rpc error: code = NotFound desc = could not find container \"9e802b533c4fd5cde701c5e0e4e340c13c65bf3b632996a11314a7d443e432aa\": container with ID starting with 9e802b533c4fd5cde701c5e0e4e340c13c65bf3b632996a11314a7d443e432aa not found: ID does not exist" Jan 21 10:29:03 crc kubenswrapper[4684]: I0121 10:29:03.952725 4684 scope.go:117] "RemoveContainer" containerID="e4e7f4988913778cf17f885f4d6424840bd5db3fc6908a8f289b69a8afc477f0" Jan 21 10:29:03 crc kubenswrapper[4684]: E0121 10:29:03.953083 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4e7f4988913778cf17f885f4d6424840bd5db3fc6908a8f289b69a8afc477f0\": container with ID starting with e4e7f4988913778cf17f885f4d6424840bd5db3fc6908a8f289b69a8afc477f0 not found: ID does not exist" containerID="e4e7f4988913778cf17f885f4d6424840bd5db3fc6908a8f289b69a8afc477f0" Jan 21 10:29:03 crc kubenswrapper[4684]: I0121 10:29:03.953111 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4e7f4988913778cf17f885f4d6424840bd5db3fc6908a8f289b69a8afc477f0"} err="failed to get container status \"e4e7f4988913778cf17f885f4d6424840bd5db3fc6908a8f289b69a8afc477f0\": rpc error: code = NotFound desc = could not find container \"e4e7f4988913778cf17f885f4d6424840bd5db3fc6908a8f289b69a8afc477f0\": container with ID starting with e4e7f4988913778cf17f885f4d6424840bd5db3fc6908a8f289b69a8afc477f0 not found: ID does not exist" Jan 21 10:29:04 crc kubenswrapper[4684]: I0121 10:29:04.528949 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cabdab1a-a30a-4748-81bd-efb016854c20" path="/var/lib/kubelet/pods/cabdab1a-a30a-4748-81bd-efb016854c20/volumes" Jan 21 10:29:26 crc kubenswrapper[4684]: I0121 10:29:26.135084 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hbfcd"] Jan 21 10:29:26 crc kubenswrapper[4684]: E0121 10:29:26.135851 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cabdab1a-a30a-4748-81bd-efb016854c20" containerName="extract-utilities" Jan 21 10:29:26 crc kubenswrapper[4684]: I0121 10:29:26.135864 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="cabdab1a-a30a-4748-81bd-efb016854c20" containerName="extract-utilities" Jan 21 10:29:26 crc kubenswrapper[4684]: E0121 10:29:26.135880 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cabdab1a-a30a-4748-81bd-efb016854c20" containerName="extract-content" Jan 21 10:29:26 crc kubenswrapper[4684]: I0121 10:29:26.135886 4684 
state_mem.go:107] "Deleted CPUSet assignment" podUID="cabdab1a-a30a-4748-81bd-efb016854c20" containerName="extract-content" Jan 21 10:29:26 crc kubenswrapper[4684]: E0121 10:29:26.135899 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cabdab1a-a30a-4748-81bd-efb016854c20" containerName="registry-server" Jan 21 10:29:26 crc kubenswrapper[4684]: I0121 10:29:26.135905 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="cabdab1a-a30a-4748-81bd-efb016854c20" containerName="registry-server" Jan 21 10:29:26 crc kubenswrapper[4684]: I0121 10:29:26.136027 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="cabdab1a-a30a-4748-81bd-efb016854c20" containerName="registry-server" Jan 21 10:29:26 crc kubenswrapper[4684]: I0121 10:29:26.136919 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hbfcd" Jan 21 10:29:26 crc kubenswrapper[4684]: I0121 10:29:26.158448 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hbfcd"] Jan 21 10:29:26 crc kubenswrapper[4684]: I0121 10:29:26.239516 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f60a281-d89c-4816-b1f7-eb7b689978f7-catalog-content\") pod \"certified-operators-hbfcd\" (UID: \"7f60a281-d89c-4816-b1f7-eb7b689978f7\") " pod="openshift-marketplace/certified-operators-hbfcd" Jan 21 10:29:26 crc kubenswrapper[4684]: I0121 10:29:26.239746 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slrrm\" (UniqueName: \"kubernetes.io/projected/7f60a281-d89c-4816-b1f7-eb7b689978f7-kube-api-access-slrrm\") pod \"certified-operators-hbfcd\" (UID: \"7f60a281-d89c-4816-b1f7-eb7b689978f7\") " pod="openshift-marketplace/certified-operators-hbfcd" Jan 21 10:29:26 crc kubenswrapper[4684]: I0121 10:29:26.239814 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f60a281-d89c-4816-b1f7-eb7b689978f7-utilities\") pod \"certified-operators-hbfcd\" (UID: \"7f60a281-d89c-4816-b1f7-eb7b689978f7\") " pod="openshift-marketplace/certified-operators-hbfcd" Jan 21 10:29:26 crc kubenswrapper[4684]: I0121 10:29:26.341112 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slrrm\" (UniqueName: \"kubernetes.io/projected/7f60a281-d89c-4816-b1f7-eb7b689978f7-kube-api-access-slrrm\") pod \"certified-operators-hbfcd\" (UID: \"7f60a281-d89c-4816-b1f7-eb7b689978f7\") " pod="openshift-marketplace/certified-operators-hbfcd" Jan 21 10:29:26 crc kubenswrapper[4684]: I0121 10:29:26.341179 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f60a281-d89c-4816-b1f7-eb7b689978f7-utilities\") pod \"certified-operators-hbfcd\" (UID: \"7f60a281-d89c-4816-b1f7-eb7b689978f7\") " pod="openshift-marketplace/certified-operators-hbfcd" Jan 21 10:29:26 crc kubenswrapper[4684]: I0121 10:29:26.341232 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f60a281-d89c-4816-b1f7-eb7b689978f7-catalog-content\") pod \"certified-operators-hbfcd\" (UID: \"7f60a281-d89c-4816-b1f7-eb7b689978f7\") " pod="openshift-marketplace/certified-operators-hbfcd" Jan 21 10:29:26 crc 
kubenswrapper[4684]: I0121 10:29:26.341858 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f60a281-d89c-4816-b1f7-eb7b689978f7-catalog-content\") pod \"certified-operators-hbfcd\" (UID: \"7f60a281-d89c-4816-b1f7-eb7b689978f7\") " pod="openshift-marketplace/certified-operators-hbfcd" Jan 21 10:29:26 crc kubenswrapper[4684]: I0121 10:29:26.342022 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f60a281-d89c-4816-b1f7-eb7b689978f7-utilities\") pod \"certified-operators-hbfcd\" (UID: \"7f60a281-d89c-4816-b1f7-eb7b689978f7\") " pod="openshift-marketplace/certified-operators-hbfcd" Jan 21 10:29:26 crc kubenswrapper[4684]: I0121 10:29:26.373177 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slrrm\" (UniqueName: \"kubernetes.io/projected/7f60a281-d89c-4816-b1f7-eb7b689978f7-kube-api-access-slrrm\") pod \"certified-operators-hbfcd\" (UID: \"7f60a281-d89c-4816-b1f7-eb7b689978f7\") " pod="openshift-marketplace/certified-operators-hbfcd" Jan 21 10:29:26 crc kubenswrapper[4684]: I0121 10:29:26.464263 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hbfcd" Jan 21 10:29:26 crc kubenswrapper[4684]: I0121 10:29:26.728316 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hbfcd"] Jan 21 10:29:27 crc kubenswrapper[4684]: I0121 10:29:27.048172 4684 generic.go:334] "Generic (PLEG): container finished" podID="7f60a281-d89c-4816-b1f7-eb7b689978f7" containerID="737f95b2d1d95cdab6e2afbc9d3456b465b31d1b276959213ca675abc5c89cd2" exitCode=0 Jan 21 10:29:27 crc kubenswrapper[4684]: I0121 10:29:27.048281 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbfcd" event={"ID":"7f60a281-d89c-4816-b1f7-eb7b689978f7","Type":"ContainerDied","Data":"737f95b2d1d95cdab6e2afbc9d3456b465b31d1b276959213ca675abc5c89cd2"} Jan 21 10:29:27 crc kubenswrapper[4684]: I0121 10:29:27.048486 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbfcd" event={"ID":"7f60a281-d89c-4816-b1f7-eb7b689978f7","Type":"ContainerStarted","Data":"d7e45eb24407a6709f9406a275f564bb5db46cee99c3fa1dc5b7695deaad66ed"} Jan 21 10:29:29 crc kubenswrapper[4684]: I0121 10:29:29.064082 4684 generic.go:334] "Generic (PLEG): container finished" podID="7f60a281-d89c-4816-b1f7-eb7b689978f7" containerID="c06ff4486cf9c13200deb5e31cc2ddec6432d2813a3d393679147bce08461888" exitCode=0 Jan 21 10:29:29 crc kubenswrapper[4684]: I0121 10:29:29.064686 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbfcd" event={"ID":"7f60a281-d89c-4816-b1f7-eb7b689978f7","Type":"ContainerDied","Data":"c06ff4486cf9c13200deb5e31cc2ddec6432d2813a3d393679147bce08461888"} Jan 21 10:29:30 crc kubenswrapper[4684]: I0121 10:29:30.073374 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbfcd" event={"ID":"7f60a281-d89c-4816-b1f7-eb7b689978f7","Type":"ContainerStarted","Data":"5d82b2156e0b2f6d76388605dfc1dddd0146d8d6deecb0e39e56119368e464e0"} Jan 21 10:29:30 crc kubenswrapper[4684]: I0121 10:29:30.095873 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hbfcd" podStartSLOduration=1.57520545 
podStartE2EDuration="4.095853535s" podCreationTimestamp="2026-01-21 10:29:26 +0000 UTC" firstStartedPulling="2026-01-21 10:29:27.049476278 +0000 UTC m=+1404.807559245" lastFinishedPulling="2026-01-21 10:29:29.570124363 +0000 UTC m=+1407.328207330" observedRunningTime="2026-01-21 10:29:30.091003195 +0000 UTC m=+1407.849086162" watchObservedRunningTime="2026-01-21 10:29:30.095853535 +0000 UTC m=+1407.853936502" Jan 21 10:29:36 crc kubenswrapper[4684]: I0121 10:29:36.465251 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hbfcd" Jan 21 10:29:36 crc kubenswrapper[4684]: I0121 10:29:36.467334 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hbfcd" Jan 21 10:29:36 crc kubenswrapper[4684]: I0121 10:29:36.529972 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hbfcd" Jan 21 10:29:37 crc kubenswrapper[4684]: I0121 10:29:37.204216 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hbfcd" Jan 21 10:29:37 crc kubenswrapper[4684]: I0121 10:29:37.277090 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hbfcd"] Jan 21 10:29:39 crc kubenswrapper[4684]: I0121 10:29:39.145198 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hbfcd" podUID="7f60a281-d89c-4816-b1f7-eb7b689978f7" containerName="registry-server" containerID="cri-o://5d82b2156e0b2f6d76388605dfc1dddd0146d8d6deecb0e39e56119368e464e0" gracePeriod=2 Jan 21 10:29:39 crc kubenswrapper[4684]: I0121 10:29:39.539065 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hbfcd" Jan 21 10:29:39 crc kubenswrapper[4684]: I0121 10:29:39.681646 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slrrm\" (UniqueName: \"kubernetes.io/projected/7f60a281-d89c-4816-b1f7-eb7b689978f7-kube-api-access-slrrm\") pod \"7f60a281-d89c-4816-b1f7-eb7b689978f7\" (UID: \"7f60a281-d89c-4816-b1f7-eb7b689978f7\") " Jan 21 10:29:39 crc kubenswrapper[4684]: I0121 10:29:39.681721 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f60a281-d89c-4816-b1f7-eb7b689978f7-catalog-content\") pod \"7f60a281-d89c-4816-b1f7-eb7b689978f7\" (UID: \"7f60a281-d89c-4816-b1f7-eb7b689978f7\") " Jan 21 10:29:39 crc kubenswrapper[4684]: I0121 10:29:39.681797 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f60a281-d89c-4816-b1f7-eb7b689978f7-utilities\") pod \"7f60a281-d89c-4816-b1f7-eb7b689978f7\" (UID: \"7f60a281-d89c-4816-b1f7-eb7b689978f7\") " Jan 21 10:29:39 crc kubenswrapper[4684]: I0121 10:29:39.682766 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f60a281-d89c-4816-b1f7-eb7b689978f7-utilities" (OuterVolumeSpecName: "utilities") pod "7f60a281-d89c-4816-b1f7-eb7b689978f7" (UID: "7f60a281-d89c-4816-b1f7-eb7b689978f7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:29:39 crc kubenswrapper[4684]: I0121 10:29:39.701621 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f60a281-d89c-4816-b1f7-eb7b689978f7-kube-api-access-slrrm" (OuterVolumeSpecName: "kube-api-access-slrrm") pod "7f60a281-d89c-4816-b1f7-eb7b689978f7" (UID: "7f60a281-d89c-4816-b1f7-eb7b689978f7"). InnerVolumeSpecName "kube-api-access-slrrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:29:39 crc kubenswrapper[4684]: I0121 10:29:39.770192 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f60a281-d89c-4816-b1f7-eb7b689978f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f60a281-d89c-4816-b1f7-eb7b689978f7" (UID: "7f60a281-d89c-4816-b1f7-eb7b689978f7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:29:39 crc kubenswrapper[4684]: I0121 10:29:39.793045 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f60a281-d89c-4816-b1f7-eb7b689978f7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:29:39 crc kubenswrapper[4684]: I0121 10:29:39.793098 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f60a281-d89c-4816-b1f7-eb7b689978f7-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:29:39 crc kubenswrapper[4684]: I0121 10:29:39.793119 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slrrm\" (UniqueName: \"kubernetes.io/projected/7f60a281-d89c-4816-b1f7-eb7b689978f7-kube-api-access-slrrm\") on node \"crc\" DevicePath \"\"" Jan 21 10:29:40 crc kubenswrapper[4684]: I0121 10:29:40.155404 4684 generic.go:334] "Generic (PLEG): container finished" podID="7f60a281-d89c-4816-b1f7-eb7b689978f7" containerID="5d82b2156e0b2f6d76388605dfc1dddd0146d8d6deecb0e39e56119368e464e0" exitCode=0 Jan 21 10:29:40 crc kubenswrapper[4684]: I0121 10:29:40.155471 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hbfcd" Jan 21 10:29:40 crc kubenswrapper[4684]: I0121 10:29:40.155473 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbfcd" event={"ID":"7f60a281-d89c-4816-b1f7-eb7b689978f7","Type":"ContainerDied","Data":"5d82b2156e0b2f6d76388605dfc1dddd0146d8d6deecb0e39e56119368e464e0"} Jan 21 10:29:40 crc kubenswrapper[4684]: I0121 10:29:40.155560 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbfcd" event={"ID":"7f60a281-d89c-4816-b1f7-eb7b689978f7","Type":"ContainerDied","Data":"d7e45eb24407a6709f9406a275f564bb5db46cee99c3fa1dc5b7695deaad66ed"} Jan 21 10:29:40 crc kubenswrapper[4684]: I0121 10:29:40.155604 4684 scope.go:117] "RemoveContainer" containerID="5d82b2156e0b2f6d76388605dfc1dddd0146d8d6deecb0e39e56119368e464e0" Jan 21 10:29:40 crc kubenswrapper[4684]: I0121 10:29:40.174445 4684 scope.go:117] "RemoveContainer" containerID="c06ff4486cf9c13200deb5e31cc2ddec6432d2813a3d393679147bce08461888" Jan 21 10:29:40 crc kubenswrapper[4684]: I0121 10:29:40.198552 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hbfcd"] Jan 21 10:29:40 crc kubenswrapper[4684]: I0121 10:29:40.203452 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hbfcd"] Jan 21 10:29:40 crc kubenswrapper[4684]: I0121 10:29:40.220246 4684 scope.go:117] "RemoveContainer" containerID="737f95b2d1d95cdab6e2afbc9d3456b465b31d1b276959213ca675abc5c89cd2" Jan 21 10:29:40 crc kubenswrapper[4684]: I0121 10:29:40.240233 4684 scope.go:117] "RemoveContainer" containerID="5d82b2156e0b2f6d76388605dfc1dddd0146d8d6deecb0e39e56119368e464e0" Jan 21 10:29:40 crc kubenswrapper[4684]: E0121 10:29:40.240796 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d82b2156e0b2f6d76388605dfc1dddd0146d8d6deecb0e39e56119368e464e0\": container with ID starting with 5d82b2156e0b2f6d76388605dfc1dddd0146d8d6deecb0e39e56119368e464e0 not found: ID does not exist" containerID="5d82b2156e0b2f6d76388605dfc1dddd0146d8d6deecb0e39e56119368e464e0" Jan 21 10:29:40 crc kubenswrapper[4684]: I0121 10:29:40.240841 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d82b2156e0b2f6d76388605dfc1dddd0146d8d6deecb0e39e56119368e464e0"} err="failed to get container status \"5d82b2156e0b2f6d76388605dfc1dddd0146d8d6deecb0e39e56119368e464e0\": rpc error: code = NotFound desc = could not find container \"5d82b2156e0b2f6d76388605dfc1dddd0146d8d6deecb0e39e56119368e464e0\": container with ID starting with 5d82b2156e0b2f6d76388605dfc1dddd0146d8d6deecb0e39e56119368e464e0 not found: ID does not exist" Jan 21 10:29:40 crc kubenswrapper[4684]: I0121 10:29:40.240861 4684 scope.go:117] "RemoveContainer" containerID="c06ff4486cf9c13200deb5e31cc2ddec6432d2813a3d393679147bce08461888" Jan 21 10:29:40 crc kubenswrapper[4684]: E0121 10:29:40.241160 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c06ff4486cf9c13200deb5e31cc2ddec6432d2813a3d393679147bce08461888\": container with ID starting with c06ff4486cf9c13200deb5e31cc2ddec6432d2813a3d393679147bce08461888 not found: ID does not exist" containerID="c06ff4486cf9c13200deb5e31cc2ddec6432d2813a3d393679147bce08461888" Jan 21 10:29:40 crc kubenswrapper[4684]: I0121 10:29:40.241201 4684 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c06ff4486cf9c13200deb5e31cc2ddec6432d2813a3d393679147bce08461888"} err="failed to get container status \"c06ff4486cf9c13200deb5e31cc2ddec6432d2813a3d393679147bce08461888\": rpc error: code = NotFound desc = could not find container \"c06ff4486cf9c13200deb5e31cc2ddec6432d2813a3d393679147bce08461888\": container with ID starting with c06ff4486cf9c13200deb5e31cc2ddec6432d2813a3d393679147bce08461888 not found: ID does not exist" Jan 21 10:29:40 crc kubenswrapper[4684]: I0121 10:29:40.241228 4684 scope.go:117] "RemoveContainer" containerID="737f95b2d1d95cdab6e2afbc9d3456b465b31d1b276959213ca675abc5c89cd2" Jan 21 10:29:40 crc kubenswrapper[4684]: E0121 10:29:40.241501 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"737f95b2d1d95cdab6e2afbc9d3456b465b31d1b276959213ca675abc5c89cd2\": container with ID starting with 737f95b2d1d95cdab6e2afbc9d3456b465b31d1b276959213ca675abc5c89cd2 not found: ID does not exist" containerID="737f95b2d1d95cdab6e2afbc9d3456b465b31d1b276959213ca675abc5c89cd2" Jan 21 10:29:40 crc kubenswrapper[4684]: I0121 10:29:40.241539 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"737f95b2d1d95cdab6e2afbc9d3456b465b31d1b276959213ca675abc5c89cd2"} err="failed to get container status \"737f95b2d1d95cdab6e2afbc9d3456b465b31d1b276959213ca675abc5c89cd2\": rpc error: code = NotFound desc = could not find container \"737f95b2d1d95cdab6e2afbc9d3456b465b31d1b276959213ca675abc5c89cd2\": container with ID starting with 737f95b2d1d95cdab6e2afbc9d3456b465b31d1b276959213ca675abc5c89cd2 not found: ID does not exist" Jan 21 10:29:40 crc kubenswrapper[4684]: I0121 10:29:40.526230 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f60a281-d89c-4816-b1f7-eb7b689978f7" path="/var/lib/kubelet/pods/7f60a281-d89c-4816-b1f7-eb7b689978f7/volumes" Jan 21 10:30:00 crc kubenswrapper[4684]: I0121 10:30:00.144328 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483190-vwgb5"] Jan 21 10:30:00 crc kubenswrapper[4684]: E0121 10:30:00.145246 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f60a281-d89c-4816-b1f7-eb7b689978f7" containerName="extract-utilities" Jan 21 10:30:00 crc kubenswrapper[4684]: I0121 10:30:00.145264 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f60a281-d89c-4816-b1f7-eb7b689978f7" containerName="extract-utilities" Jan 21 10:30:00 crc kubenswrapper[4684]: E0121 10:30:00.145278 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f60a281-d89c-4816-b1f7-eb7b689978f7" containerName="extract-content" Jan 21 10:30:00 crc kubenswrapper[4684]: I0121 10:30:00.145290 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f60a281-d89c-4816-b1f7-eb7b689978f7" containerName="extract-content" Jan 21 10:30:00 crc kubenswrapper[4684]: E0121 10:30:00.145317 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f60a281-d89c-4816-b1f7-eb7b689978f7" containerName="registry-server" Jan 21 10:30:00 crc kubenswrapper[4684]: I0121 10:30:00.145324 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f60a281-d89c-4816-b1f7-eb7b689978f7" containerName="registry-server" Jan 21 10:30:00 crc kubenswrapper[4684]: I0121 10:30:00.145555 4684 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7f60a281-d89c-4816-b1f7-eb7b689978f7" containerName="registry-server" Jan 21 10:30:00 crc kubenswrapper[4684]: I0121 10:30:00.146121 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-vwgb5" Jan 21 10:30:00 crc kubenswrapper[4684]: I0121 10:30:00.150967 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 10:30:00 crc kubenswrapper[4684]: I0121 10:30:00.151051 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 10:30:00 crc kubenswrapper[4684]: I0121 10:30:00.159861 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483190-vwgb5"] Jan 21 10:30:00 crc kubenswrapper[4684]: I0121 10:30:00.224147 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcq2b\" (UniqueName: \"kubernetes.io/projected/69dbd59f-4f84-4d28-9ac9-24553825dfd5-kube-api-access-bcq2b\") pod \"collect-profiles-29483190-vwgb5\" (UID: \"69dbd59f-4f84-4d28-9ac9-24553825dfd5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-vwgb5" Jan 21 10:30:00 crc kubenswrapper[4684]: I0121 10:30:00.224309 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69dbd59f-4f84-4d28-9ac9-24553825dfd5-config-volume\") pod \"collect-profiles-29483190-vwgb5\" (UID: \"69dbd59f-4f84-4d28-9ac9-24553825dfd5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-vwgb5" Jan 21 10:30:00 crc kubenswrapper[4684]: I0121 10:30:00.224348 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/69dbd59f-4f84-4d28-9ac9-24553825dfd5-secret-volume\") pod \"collect-profiles-29483190-vwgb5\" (UID: \"69dbd59f-4f84-4d28-9ac9-24553825dfd5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-vwgb5" Jan 21 10:30:00 crc kubenswrapper[4684]: I0121 10:30:00.325836 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcq2b\" (UniqueName: \"kubernetes.io/projected/69dbd59f-4f84-4d28-9ac9-24553825dfd5-kube-api-access-bcq2b\") pod \"collect-profiles-29483190-vwgb5\" (UID: \"69dbd59f-4f84-4d28-9ac9-24553825dfd5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-vwgb5" Jan 21 10:30:00 crc kubenswrapper[4684]: I0121 10:30:00.325922 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69dbd59f-4f84-4d28-9ac9-24553825dfd5-config-volume\") pod \"collect-profiles-29483190-vwgb5\" (UID: \"69dbd59f-4f84-4d28-9ac9-24553825dfd5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-vwgb5" Jan 21 10:30:00 crc kubenswrapper[4684]: I0121 10:30:00.325952 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/69dbd59f-4f84-4d28-9ac9-24553825dfd5-secret-volume\") pod \"collect-profiles-29483190-vwgb5\" (UID: \"69dbd59f-4f84-4d28-9ac9-24553825dfd5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-vwgb5" Jan 21 10:30:00 crc kubenswrapper[4684]: I0121 10:30:00.327660 
4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69dbd59f-4f84-4d28-9ac9-24553825dfd5-config-volume\") pod \"collect-profiles-29483190-vwgb5\" (UID: \"69dbd59f-4f84-4d28-9ac9-24553825dfd5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-vwgb5" Jan 21 10:30:00 crc kubenswrapper[4684]: I0121 10:30:00.333018 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/69dbd59f-4f84-4d28-9ac9-24553825dfd5-secret-volume\") pod \"collect-profiles-29483190-vwgb5\" (UID: \"69dbd59f-4f84-4d28-9ac9-24553825dfd5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-vwgb5" Jan 21 10:30:00 crc kubenswrapper[4684]: I0121 10:30:00.351626 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcq2b\" (UniqueName: \"kubernetes.io/projected/69dbd59f-4f84-4d28-9ac9-24553825dfd5-kube-api-access-bcq2b\") pod \"collect-profiles-29483190-vwgb5\" (UID: \"69dbd59f-4f84-4d28-9ac9-24553825dfd5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-vwgb5" Jan 21 10:30:00 crc kubenswrapper[4684]: I0121 10:30:00.465425 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-vwgb5" Jan 21 10:30:00 crc kubenswrapper[4684]: I0121 10:30:00.671754 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483190-vwgb5"] Jan 21 10:30:01 crc kubenswrapper[4684]: I0121 10:30:01.352459 4684 generic.go:334] "Generic (PLEG): container finished" podID="69dbd59f-4f84-4d28-9ac9-24553825dfd5" containerID="766aac7c468004c6bb3276027fa451785351eda2c35df10d20f220194e9df58a" exitCode=0 Jan 21 10:30:01 crc kubenswrapper[4684]: I0121 10:30:01.352678 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-vwgb5" event={"ID":"69dbd59f-4f84-4d28-9ac9-24553825dfd5","Type":"ContainerDied","Data":"766aac7c468004c6bb3276027fa451785351eda2c35df10d20f220194e9df58a"} Jan 21 10:30:01 crc kubenswrapper[4684]: I0121 10:30:01.352704 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-vwgb5" event={"ID":"69dbd59f-4f84-4d28-9ac9-24553825dfd5","Type":"ContainerStarted","Data":"17e68f3043a7cc4e4117b7d9b93b520034a136fdd867e746232537a39aecbf52"} Jan 21 10:30:02 crc kubenswrapper[4684]: I0121 10:30:02.676998 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-vwgb5" Jan 21 10:30:02 crc kubenswrapper[4684]: I0121 10:30:02.768399 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/69dbd59f-4f84-4d28-9ac9-24553825dfd5-secret-volume\") pod \"69dbd59f-4f84-4d28-9ac9-24553825dfd5\" (UID: \"69dbd59f-4f84-4d28-9ac9-24553825dfd5\") " Jan 21 10:30:02 crc kubenswrapper[4684]: I0121 10:30:02.768451 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcq2b\" (UniqueName: \"kubernetes.io/projected/69dbd59f-4f84-4d28-9ac9-24553825dfd5-kube-api-access-bcq2b\") pod \"69dbd59f-4f84-4d28-9ac9-24553825dfd5\" (UID: \"69dbd59f-4f84-4d28-9ac9-24553825dfd5\") " Jan 21 10:30:02 crc kubenswrapper[4684]: I0121 10:30:02.768518 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69dbd59f-4f84-4d28-9ac9-24553825dfd5-config-volume\") pod \"69dbd59f-4f84-4d28-9ac9-24553825dfd5\" (UID: \"69dbd59f-4f84-4d28-9ac9-24553825dfd5\") " Jan 21 10:30:02 crc kubenswrapper[4684]: I0121 10:30:02.769525 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69dbd59f-4f84-4d28-9ac9-24553825dfd5-config-volume" (OuterVolumeSpecName: "config-volume") pod "69dbd59f-4f84-4d28-9ac9-24553825dfd5" (UID: "69dbd59f-4f84-4d28-9ac9-24553825dfd5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:30:02 crc kubenswrapper[4684]: I0121 10:30:02.776037 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69dbd59f-4f84-4d28-9ac9-24553825dfd5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "69dbd59f-4f84-4d28-9ac9-24553825dfd5" (UID: "69dbd59f-4f84-4d28-9ac9-24553825dfd5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:30:02 crc kubenswrapper[4684]: I0121 10:30:02.776419 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69dbd59f-4f84-4d28-9ac9-24553825dfd5-kube-api-access-bcq2b" (OuterVolumeSpecName: "kube-api-access-bcq2b") pod "69dbd59f-4f84-4d28-9ac9-24553825dfd5" (UID: "69dbd59f-4f84-4d28-9ac9-24553825dfd5"). InnerVolumeSpecName "kube-api-access-bcq2b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:30:02 crc kubenswrapper[4684]: I0121 10:30:02.870730 4684 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69dbd59f-4f84-4d28-9ac9-24553825dfd5-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 10:30:02 crc kubenswrapper[4684]: I0121 10:30:02.870822 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcq2b\" (UniqueName: \"kubernetes.io/projected/69dbd59f-4f84-4d28-9ac9-24553825dfd5-kube-api-access-bcq2b\") on node \"crc\" DevicePath \"\"" Jan 21 10:30:02 crc kubenswrapper[4684]: I0121 10:30:02.870847 4684 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/69dbd59f-4f84-4d28-9ac9-24553825dfd5-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 10:30:03 crc kubenswrapper[4684]: I0121 10:30:03.371397 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-vwgb5" event={"ID":"69dbd59f-4f84-4d28-9ac9-24553825dfd5","Type":"ContainerDied","Data":"17e68f3043a7cc4e4117b7d9b93b520034a136fdd867e746232537a39aecbf52"} Jan 21 10:30:03 crc kubenswrapper[4684]: I0121 10:30:03.371443 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17e68f3043a7cc4e4117b7d9b93b520034a136fdd867e746232537a39aecbf52" Jan 21 10:30:03 crc kubenswrapper[4684]: I0121 10:30:03.371519 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-vwgb5" Jan 21 10:30:37 crc kubenswrapper[4684]: I0121 10:30:37.302037 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:30:37 crc kubenswrapper[4684]: I0121 10:30:37.302653 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:30:41 crc kubenswrapper[4684]: I0121 10:30:41.913335 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-x5xmz"] Jan 21 10:30:41 crc kubenswrapper[4684]: E0121 10:30:41.914259 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69dbd59f-4f84-4d28-9ac9-24553825dfd5" containerName="collect-profiles" Jan 21 10:30:41 crc kubenswrapper[4684]: I0121 10:30:41.914276 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="69dbd59f-4f84-4d28-9ac9-24553825dfd5" containerName="collect-profiles" Jan 21 10:30:41 crc kubenswrapper[4684]: I0121 10:30:41.914468 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="69dbd59f-4f84-4d28-9ac9-24553825dfd5" containerName="collect-profiles" Jan 21 10:30:41 crc kubenswrapper[4684]: I0121 10:30:41.914976 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-x5xmz" Jan 21 10:30:41 crc kubenswrapper[4684]: I0121 10:30:41.928091 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-x5xmz"] Jan 21 10:30:41 crc kubenswrapper[4684]: I0121 10:30:41.988440 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tm8q\" (UniqueName: \"kubernetes.io/projected/3ce73930-a038-4adf-a756-0419da6dd9b6-kube-api-access-7tm8q\") pod \"infrawatch-operators-x5xmz\" (UID: \"3ce73930-a038-4adf-a756-0419da6dd9b6\") " pod="service-telemetry/infrawatch-operators-x5xmz" Jan 21 10:30:42 crc kubenswrapper[4684]: I0121 10:30:42.089738 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tm8q\" (UniqueName: \"kubernetes.io/projected/3ce73930-a038-4adf-a756-0419da6dd9b6-kube-api-access-7tm8q\") pod \"infrawatch-operators-x5xmz\" (UID: \"3ce73930-a038-4adf-a756-0419da6dd9b6\") " pod="service-telemetry/infrawatch-operators-x5xmz" Jan 21 10:30:42 crc kubenswrapper[4684]: I0121 10:30:42.126320 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tm8q\" (UniqueName: \"kubernetes.io/projected/3ce73930-a038-4adf-a756-0419da6dd9b6-kube-api-access-7tm8q\") pod \"infrawatch-operators-x5xmz\" (UID: \"3ce73930-a038-4adf-a756-0419da6dd9b6\") " pod="service-telemetry/infrawatch-operators-x5xmz" Jan 21 10:30:42 crc kubenswrapper[4684]: I0121 10:30:42.252730 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-x5xmz" Jan 21 10:30:42 crc kubenswrapper[4684]: I0121 10:30:42.479623 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-x5xmz"] Jan 21 10:30:42 crc kubenswrapper[4684]: I0121 10:30:42.496464 4684 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 10:30:42 crc kubenswrapper[4684]: I0121 10:30:42.675821 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-x5xmz" event={"ID":"3ce73930-a038-4adf-a756-0419da6dd9b6","Type":"ContainerStarted","Data":"ededbc6fd7cf94e5bf5169d3f7f486d4d3d41853468c8b3cf36cd261798a848f"} Jan 21 10:30:43 crc kubenswrapper[4684]: I0121 10:30:43.683533 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-x5xmz" event={"ID":"3ce73930-a038-4adf-a756-0419da6dd9b6","Type":"ContainerStarted","Data":"e5a4f82c5a7821fb68ea7011c8e25fed823e6719096c5fdd4d4d4c7137f3b639"} Jan 21 10:30:44 crc kubenswrapper[4684]: I0121 10:30:44.705969 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-x5xmz" podStartSLOduration=2.760155267 podStartE2EDuration="3.705953467s" podCreationTimestamp="2026-01-21 10:30:41 +0000 UTC" firstStartedPulling="2026-01-21 10:30:42.496242081 +0000 UTC m=+1480.254325048" lastFinishedPulling="2026-01-21 10:30:43.442040291 +0000 UTC m=+1481.200123248" observedRunningTime="2026-01-21 10:30:44.701899721 +0000 UTC m=+1482.459982678" watchObservedRunningTime="2026-01-21 10:30:44.705953467 +0000 UTC m=+1482.464036434" Jan 21 10:30:52 crc kubenswrapper[4684]: I0121 10:30:52.253530 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-x5xmz" Jan 21 10:30:52 crc kubenswrapper[4684]: I0121 10:30:52.253830 4684 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-x5xmz" Jan 21 10:30:52 crc kubenswrapper[4684]: I0121 10:30:52.278341 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-x5xmz" Jan 21 10:30:52 crc kubenswrapper[4684]: I0121 10:30:52.757548 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-x5xmz" Jan 21 10:30:52 crc kubenswrapper[4684]: I0121 10:30:52.818104 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-x5xmz"] Jan 21 10:30:54 crc kubenswrapper[4684]: I0121 10:30:54.748800 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-x5xmz" podUID="3ce73930-a038-4adf-a756-0419da6dd9b6" containerName="registry-server" containerID="cri-o://e5a4f82c5a7821fb68ea7011c8e25fed823e6719096c5fdd4d4d4c7137f3b639" gracePeriod=2 Jan 21 10:30:55 crc kubenswrapper[4684]: I0121 10:30:55.642340 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-x5xmz" Jan 21 10:30:55 crc kubenswrapper[4684]: I0121 10:30:55.759226 4684 generic.go:334] "Generic (PLEG): container finished" podID="3ce73930-a038-4adf-a756-0419da6dd9b6" containerID="e5a4f82c5a7821fb68ea7011c8e25fed823e6719096c5fdd4d4d4c7137f3b639" exitCode=0 Jan 21 10:30:55 crc kubenswrapper[4684]: I0121 10:30:55.759282 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-x5xmz" event={"ID":"3ce73930-a038-4adf-a756-0419da6dd9b6","Type":"ContainerDied","Data":"e5a4f82c5a7821fb68ea7011c8e25fed823e6719096c5fdd4d4d4c7137f3b639"} Jan 21 10:30:55 crc kubenswrapper[4684]: I0121 10:30:55.759316 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-x5xmz" event={"ID":"3ce73930-a038-4adf-a756-0419da6dd9b6","Type":"ContainerDied","Data":"ededbc6fd7cf94e5bf5169d3f7f486d4d3d41853468c8b3cf36cd261798a848f"} Jan 21 10:30:55 crc kubenswrapper[4684]: I0121 10:30:55.759336 4684 scope.go:117] "RemoveContainer" containerID="e5a4f82c5a7821fb68ea7011c8e25fed823e6719096c5fdd4d4d4c7137f3b639" Jan 21 10:30:55 crc kubenswrapper[4684]: I0121 10:30:55.759493 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-x5xmz" Jan 21 10:30:55 crc kubenswrapper[4684]: I0121 10:30:55.779590 4684 scope.go:117] "RemoveContainer" containerID="e5a4f82c5a7821fb68ea7011c8e25fed823e6719096c5fdd4d4d4c7137f3b639" Jan 21 10:30:55 crc kubenswrapper[4684]: E0121 10:30:55.780076 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5a4f82c5a7821fb68ea7011c8e25fed823e6719096c5fdd4d4d4c7137f3b639\": container with ID starting with e5a4f82c5a7821fb68ea7011c8e25fed823e6719096c5fdd4d4d4c7137f3b639 not found: ID does not exist" containerID="e5a4f82c5a7821fb68ea7011c8e25fed823e6719096c5fdd4d4d4c7137f3b639" Jan 21 10:30:55 crc kubenswrapper[4684]: I0121 10:30:55.780138 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5a4f82c5a7821fb68ea7011c8e25fed823e6719096c5fdd4d4d4c7137f3b639"} err="failed to get container status \"e5a4f82c5a7821fb68ea7011c8e25fed823e6719096c5fdd4d4d4c7137f3b639\": rpc error: code = NotFound desc = could not find container \"e5a4f82c5a7821fb68ea7011c8e25fed823e6719096c5fdd4d4d4c7137f3b639\": container with ID starting with e5a4f82c5a7821fb68ea7011c8e25fed823e6719096c5fdd4d4d4c7137f3b639 not found: ID does not exist" Jan 21 10:30:55 crc kubenswrapper[4684]: I0121 10:30:55.829874 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tm8q\" (UniqueName: \"kubernetes.io/projected/3ce73930-a038-4adf-a756-0419da6dd9b6-kube-api-access-7tm8q\") pod \"3ce73930-a038-4adf-a756-0419da6dd9b6\" (UID: \"3ce73930-a038-4adf-a756-0419da6dd9b6\") " Jan 21 10:30:55 crc kubenswrapper[4684]: I0121 10:30:55.836783 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ce73930-a038-4adf-a756-0419da6dd9b6-kube-api-access-7tm8q" (OuterVolumeSpecName: "kube-api-access-7tm8q") pod "3ce73930-a038-4adf-a756-0419da6dd9b6" (UID: "3ce73930-a038-4adf-a756-0419da6dd9b6"). InnerVolumeSpecName "kube-api-access-7tm8q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:30:55 crc kubenswrapper[4684]: I0121 10:30:55.931705 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tm8q\" (UniqueName: \"kubernetes.io/projected/3ce73930-a038-4adf-a756-0419da6dd9b6-kube-api-access-7tm8q\") on node \"crc\" DevicePath \"\"" Jan 21 10:30:56 crc kubenswrapper[4684]: I0121 10:30:56.094734 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-x5xmz"] Jan 21 10:30:56 crc kubenswrapper[4684]: I0121 10:30:56.101745 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-x5xmz"] Jan 21 10:30:56 crc kubenswrapper[4684]: I0121 10:30:56.522244 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ce73930-a038-4adf-a756-0419da6dd9b6" path="/var/lib/kubelet/pods/3ce73930-a038-4adf-a756-0419da6dd9b6/volumes" Jan 21 10:31:07 crc kubenswrapper[4684]: I0121 10:31:07.302715 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:31:07 crc kubenswrapper[4684]: I0121 10:31:07.305347 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:31:37 crc kubenswrapper[4684]: I0121 10:31:37.303613 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:31:37 crc kubenswrapper[4684]: I0121 10:31:37.304601 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:31:37 crc kubenswrapper[4684]: I0121 10:31:37.304672 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" Jan 21 10:31:37 crc kubenswrapper[4684]: I0121 10:31:37.305622 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"df0b97e2271859ee95d0e6fe0e56bb03c909446093ed16d630b8869849584b56"} pod="openshift-machine-config-operator/machine-config-daemon-sff6s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 10:31:37 crc kubenswrapper[4684]: I0121 10:31:37.305672 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" containerID="cri-o://df0b97e2271859ee95d0e6fe0e56bb03c909446093ed16d630b8869849584b56" gracePeriod=600 Jan 21 10:31:37 crc kubenswrapper[4684]: E0121 10:31:37.443301 4684 pod_workers.go:1301] "Error 
Jan 21 10:31:38 crc kubenswrapper[4684]: I0121 10:31:38.083422 4684 generic.go:334] "Generic (PLEG): container finished" podID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerID="df0b97e2271859ee95d0e6fe0e56bb03c909446093ed16d630b8869849584b56" exitCode=0
Jan 21 10:31:38 crc kubenswrapper[4684]: I0121 10:31:38.083487 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" event={"ID":"55d2c484-cf10-46b5-913f-2a033a2ff5c1","Type":"ContainerDied","Data":"df0b97e2271859ee95d0e6fe0e56bb03c909446093ed16d630b8869849584b56"}
Jan 21 10:31:38 crc kubenswrapper[4684]: I0121 10:31:38.083762 4684 scope.go:117] "RemoveContainer" containerID="0d0fb07716548375205a1c7832495f9b372e76b06daf4639ad84e8fdc9a5fec0"
Jan 21 10:31:38 crc kubenswrapper[4684]: I0121 10:31:38.084240 4684 scope.go:117] "RemoveContainer" containerID="df0b97e2271859ee95d0e6fe0e56bb03c909446093ed16d630b8869849584b56"
Jan 21 10:31:38 crc kubenswrapper[4684]: E0121 10:31:38.084509 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1"
Jan 21 10:31:51 crc kubenswrapper[4684]: I0121 10:31:51.514699 4684 scope.go:117] "RemoveContainer" containerID="df0b97e2271859ee95d0e6fe0e56bb03c909446093ed16d630b8869849584b56"
Jan 21 10:31:51 crc kubenswrapper[4684]: E0121 10:31:51.515525 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1"
Jan 21 10:32:04 crc kubenswrapper[4684]: I0121 10:32:04.514406 4684 scope.go:117] "RemoveContainer" containerID="df0b97e2271859ee95d0e6fe0e56bb03c909446093ed16d630b8869849584b56"
Jan 21 10:32:04 crc kubenswrapper[4684]: E0121 10:32:04.515252 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1"
Jan 21 10:32:06 crc kubenswrapper[4684]: I0121 10:32:06.665490 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wmnzf"]
Jan 21 10:32:06 crc kubenswrapper[4684]: E0121 10:32:06.667246 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ce73930-a038-4adf-a756-0419da6dd9b6" containerName="registry-server"
Jan 21 10:32:06 crc kubenswrapper[4684]: I0121 10:32:06.667320 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ce73930-a038-4adf-a756-0419da6dd9b6" containerName="registry-server"
Jan 21 10:32:06 crc kubenswrapper[4684]: I0121 10:32:06.667513 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ce73930-a038-4adf-a756-0419da6dd9b6" containerName="registry-server"
Jan 21 10:32:06 crc kubenswrapper[4684]: I0121 10:32:06.668444 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wmnzf"
Jan 21 10:32:06 crc kubenswrapper[4684]: I0121 10:32:06.692335 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wmnzf"]
Jan 21 10:32:06 crc kubenswrapper[4684]: I0121 10:32:06.834209 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8302dd4d-f927-4216-803a-29d735045608-utilities\") pod \"community-operators-wmnzf\" (UID: \"8302dd4d-f927-4216-803a-29d735045608\") " pod="openshift-marketplace/community-operators-wmnzf"
Jan 21 10:32:06 crc kubenswrapper[4684]: I0121 10:32:06.834289 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8302dd4d-f927-4216-803a-29d735045608-catalog-content\") pod \"community-operators-wmnzf\" (UID: \"8302dd4d-f927-4216-803a-29d735045608\") " pod="openshift-marketplace/community-operators-wmnzf"
Jan 21 10:32:06 crc kubenswrapper[4684]: I0121 10:32:06.834395 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d88xb\" (UniqueName: \"kubernetes.io/projected/8302dd4d-f927-4216-803a-29d735045608-kube-api-access-d88xb\") pod \"community-operators-wmnzf\" (UID: \"8302dd4d-f927-4216-803a-29d735045608\") " pod="openshift-marketplace/community-operators-wmnzf"
Jan 21 10:32:06 crc kubenswrapper[4684]: I0121 10:32:06.936267 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8302dd4d-f927-4216-803a-29d735045608-utilities\") pod \"community-operators-wmnzf\" (UID: \"8302dd4d-f927-4216-803a-29d735045608\") " pod="openshift-marketplace/community-operators-wmnzf"
Jan 21 10:32:06 crc kubenswrapper[4684]: I0121 10:32:06.936341 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8302dd4d-f927-4216-803a-29d735045608-catalog-content\") pod \"community-operators-wmnzf\" (UID: \"8302dd4d-f927-4216-803a-29d735045608\") " pod="openshift-marketplace/community-operators-wmnzf"
Jan 21 10:32:06 crc kubenswrapper[4684]: I0121 10:32:06.936442 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d88xb\" (UniqueName: \"kubernetes.io/projected/8302dd4d-f927-4216-803a-29d735045608-kube-api-access-d88xb\") pod \"community-operators-wmnzf\" (UID: \"8302dd4d-f927-4216-803a-29d735045608\") " pod="openshift-marketplace/community-operators-wmnzf"
Jan 21 10:32:06 crc kubenswrapper[4684]: I0121 10:32:06.936782 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8302dd4d-f927-4216-803a-29d735045608-utilities\") pod \"community-operators-wmnzf\" (UID: \"8302dd4d-f927-4216-803a-29d735045608\") " pod="openshift-marketplace/community-operators-wmnzf"
Jan 21 10:32:06 crc kubenswrapper[4684]: I0121 10:32:06.936876 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8302dd4d-f927-4216-803a-29d735045608-catalog-content\") pod \"community-operators-wmnzf\" (UID: \"8302dd4d-f927-4216-803a-29d735045608\") " pod="openshift-marketplace/community-operators-wmnzf"
Jan 21 10:32:06 crc kubenswrapper[4684]: I0121 10:32:06.959227 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d88xb\" (UniqueName: \"kubernetes.io/projected/8302dd4d-f927-4216-803a-29d735045608-kube-api-access-d88xb\") pod \"community-operators-wmnzf\" (UID: \"8302dd4d-f927-4216-803a-29d735045608\") " pod="openshift-marketplace/community-operators-wmnzf"
Jan 21 10:32:07 crc kubenswrapper[4684]: I0121 10:32:07.000033 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wmnzf"
Jan 21 10:32:07 crc kubenswrapper[4684]: I0121 10:32:07.267728 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wmnzf"]
Jan 21 10:32:07 crc kubenswrapper[4684]: I0121 10:32:07.317481 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmnzf" event={"ID":"8302dd4d-f927-4216-803a-29d735045608","Type":"ContainerStarted","Data":"ec069724cf22d21f12900468c164cd34dfdc77babbcbc98891f86464a1444cb3"}
Jan 21 10:32:08 crc kubenswrapper[4684]: I0121 10:32:08.326703 4684 generic.go:334] "Generic (PLEG): container finished" podID="8302dd4d-f927-4216-803a-29d735045608" containerID="cfd7ce3e880133a8fb4a5fc0d74d0946c4c3c452c27de464cd33a8fcebd984d7" exitCode=0
Jan 21 10:32:08 crc kubenswrapper[4684]: I0121 10:32:08.326928 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmnzf" event={"ID":"8302dd4d-f927-4216-803a-29d735045608","Type":"ContainerDied","Data":"cfd7ce3e880133a8fb4a5fc0d74d0946c4c3c452c27de464cd33a8fcebd984d7"}
Jan 21 10:32:09 crc kubenswrapper[4684]: I0121 10:32:09.334386 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmnzf" event={"ID":"8302dd4d-f927-4216-803a-29d735045608","Type":"ContainerStarted","Data":"a3cf40d708a23d5830eacd36236888d8c0cf04cf6d7d0d8ca8d3121d8a111fc8"}
Jan 21 10:32:10 crc kubenswrapper[4684]: I0121 10:32:10.343556 4684 generic.go:334] "Generic (PLEG): container finished" podID="8302dd4d-f927-4216-803a-29d735045608" containerID="a3cf40d708a23d5830eacd36236888d8c0cf04cf6d7d0d8ca8d3121d8a111fc8" exitCode=0
Jan 21 10:32:10 crc kubenswrapper[4684]: I0121 10:32:10.343598 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmnzf" event={"ID":"8302dd4d-f927-4216-803a-29d735045608","Type":"ContainerDied","Data":"a3cf40d708a23d5830eacd36236888d8c0cf04cf6d7d0d8ca8d3121d8a111fc8"}
Jan 21 10:32:11 crc kubenswrapper[4684]: I0121 10:32:11.352547 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmnzf" event={"ID":"8302dd4d-f927-4216-803a-29d735045608","Type":"ContainerStarted","Data":"3ac422399d7b4ce0219780b5ec26a0a61876c3cb042bf1ea4360546d232896eb"}
Jan 21 10:32:11 crc kubenswrapper[4684]: I0121 10:32:11.373433 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wmnzf" podStartSLOduration=2.706867258 podStartE2EDuration="5.37341773s" podCreationTimestamp="2026-01-21 10:32:06 +0000 UTC" firstStartedPulling="2026-01-21 10:32:08.328517965 +0000 UTC m=+1566.086600972" lastFinishedPulling="2026-01-21 10:32:10.995068477 +0000 UTC m=+1568.753151444" observedRunningTime="2026-01-21 10:32:11.371553923 +0000 UTC m=+1569.129636920" watchObservedRunningTime="2026-01-21 10:32:11.37341773 +0000 UTC m=+1569.131500697"
Jan 21 10:32:17 crc kubenswrapper[4684]: I0121 10:32:17.001095 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wmnzf"
Jan 21 10:32:17 crc kubenswrapper[4684]: I0121 10:32:17.001689 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wmnzf"
Jan 21 10:32:17 crc kubenswrapper[4684]: I0121 10:32:17.054978 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wmnzf"
Jan 21 10:32:17 crc kubenswrapper[4684]: I0121 10:32:17.441186 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wmnzf"
Jan 21 10:32:17 crc kubenswrapper[4684]: I0121 10:32:17.490106 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wmnzf"]
Jan 21 10:32:19 crc kubenswrapper[4684]: I0121 10:32:19.412731 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wmnzf" podUID="8302dd4d-f927-4216-803a-29d735045608" containerName="registry-server" containerID="cri-o://3ac422399d7b4ce0219780b5ec26a0a61876c3cb042bf1ea4360546d232896eb" gracePeriod=2
Jan 21 10:32:19 crc kubenswrapper[4684]: I0121 10:32:19.514497 4684 scope.go:117] "RemoveContainer" containerID="df0b97e2271859ee95d0e6fe0e56bb03c909446093ed16d630b8869849584b56"
Jan 21 10:32:19 crc kubenswrapper[4684]: E0121 10:32:19.514798 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1"
Jan 21 10:32:20 crc kubenswrapper[4684]: I0121 10:32:20.421830 4684 generic.go:334] "Generic (PLEG): container finished" podID="8302dd4d-f927-4216-803a-29d735045608" containerID="3ac422399d7b4ce0219780b5ec26a0a61876c3cb042bf1ea4360546d232896eb" exitCode=0
Jan 21 10:32:20 crc kubenswrapper[4684]: I0121 10:32:20.422148 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmnzf" event={"ID":"8302dd4d-f927-4216-803a-29d735045608","Type":"ContainerDied","Data":"3ac422399d7b4ce0219780b5ec26a0a61876c3cb042bf1ea4360546d232896eb"}
Jan 21 10:32:20 crc kubenswrapper[4684]: I0121 10:32:20.483403 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wmnzf"
Jan 21 10:32:20 crc kubenswrapper[4684]: I0121 10:32:20.647885 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8302dd4d-f927-4216-803a-29d735045608-utilities\") pod \"8302dd4d-f927-4216-803a-29d735045608\" (UID: \"8302dd4d-f927-4216-803a-29d735045608\") "
Jan 21 10:32:20 crc kubenswrapper[4684]: I0121 10:32:20.648165 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8302dd4d-f927-4216-803a-29d735045608-catalog-content\") pod \"8302dd4d-f927-4216-803a-29d735045608\" (UID: \"8302dd4d-f927-4216-803a-29d735045608\") "
Jan 21 10:32:20 crc kubenswrapper[4684]: I0121 10:32:20.648284 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d88xb\" (UniqueName: \"kubernetes.io/projected/8302dd4d-f927-4216-803a-29d735045608-kube-api-access-d88xb\") pod \"8302dd4d-f927-4216-803a-29d735045608\" (UID: \"8302dd4d-f927-4216-803a-29d735045608\") "
Jan 21 10:32:20 crc kubenswrapper[4684]: I0121 10:32:20.649078 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8302dd4d-f927-4216-803a-29d735045608-utilities" (OuterVolumeSpecName: "utilities") pod "8302dd4d-f927-4216-803a-29d735045608" (UID: "8302dd4d-f927-4216-803a-29d735045608"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 10:32:20 crc kubenswrapper[4684]: I0121 10:32:20.658712 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8302dd4d-f927-4216-803a-29d735045608-kube-api-access-d88xb" (OuterVolumeSpecName: "kube-api-access-d88xb") pod "8302dd4d-f927-4216-803a-29d735045608" (UID: "8302dd4d-f927-4216-803a-29d735045608"). InnerVolumeSpecName "kube-api-access-d88xb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 10:32:20 crc kubenswrapper[4684]: I0121 10:32:20.704503 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8302dd4d-f927-4216-803a-29d735045608-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8302dd4d-f927-4216-803a-29d735045608" (UID: "8302dd4d-f927-4216-803a-29d735045608"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:32:20 crc kubenswrapper[4684]: I0121 10:32:20.750020 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d88xb\" (UniqueName: \"kubernetes.io/projected/8302dd4d-f927-4216-803a-29d735045608-kube-api-access-d88xb\") on node \"crc\" DevicePath \"\"" Jan 21 10:32:20 crc kubenswrapper[4684]: I0121 10:32:20.750080 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8302dd4d-f927-4216-803a-29d735045608-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:32:20 crc kubenswrapper[4684]: I0121 10:32:20.750105 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8302dd4d-f927-4216-803a-29d735045608-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:32:21 crc kubenswrapper[4684]: I0121 10:32:21.435870 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmnzf" event={"ID":"8302dd4d-f927-4216-803a-29d735045608","Type":"ContainerDied","Data":"ec069724cf22d21f12900468c164cd34dfdc77babbcbc98891f86464a1444cb3"} Jan 21 10:32:21 crc kubenswrapper[4684]: I0121 10:32:21.435948 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wmnzf" Jan 21 10:32:21 crc kubenswrapper[4684]: I0121 10:32:21.436791 4684 scope.go:117] "RemoveContainer" containerID="3ac422399d7b4ce0219780b5ec26a0a61876c3cb042bf1ea4360546d232896eb" Jan 21 10:32:21 crc kubenswrapper[4684]: I0121 10:32:21.455301 4684 scope.go:117] "RemoveContainer" containerID="a3cf40d708a23d5830eacd36236888d8c0cf04cf6d7d0d8ca8d3121d8a111fc8" Jan 21 10:32:21 crc kubenswrapper[4684]: I0121 10:32:21.472526 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wmnzf"] Jan 21 10:32:21 crc kubenswrapper[4684]: I0121 10:32:21.478099 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wmnzf"] Jan 21 10:32:21 crc kubenswrapper[4684]: I0121 10:32:21.491439 4684 scope.go:117] "RemoveContainer" containerID="cfd7ce3e880133a8fb4a5fc0d74d0946c4c3c452c27de464cd33a8fcebd984d7" Jan 21 10:32:22 crc kubenswrapper[4684]: I0121 10:32:22.521991 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8302dd4d-f927-4216-803a-29d735045608" path="/var/lib/kubelet/pods/8302dd4d-f927-4216-803a-29d735045608/volumes" Jan 21 10:32:31 crc kubenswrapper[4684]: I0121 10:32:31.514585 4684 scope.go:117] "RemoveContainer" containerID="df0b97e2271859ee95d0e6fe0e56bb03c909446093ed16d630b8869849584b56" Jan 21 10:32:31 crc kubenswrapper[4684]: E0121 10:32:31.515311 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:32:44 crc kubenswrapper[4684]: I0121 10:32:44.515124 4684 scope.go:117] "RemoveContainer" containerID="df0b97e2271859ee95d0e6fe0e56bb03c909446093ed16d630b8869849584b56" Jan 21 10:32:44 crc kubenswrapper[4684]: E0121 10:32:44.516200 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:32:59 crc kubenswrapper[4684]: I0121 10:32:59.516057 4684 scope.go:117] "RemoveContainer" containerID="df0b97e2271859ee95d0e6fe0e56bb03c909446093ed16d630b8869849584b56" Jan 21 10:32:59 crc kubenswrapper[4684]: E0121 10:32:59.517109 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:33:13 crc kubenswrapper[4684]: I0121 10:33:13.514235 4684 scope.go:117] "RemoveContainer" containerID="df0b97e2271859ee95d0e6fe0e56bb03c909446093ed16d630b8869849584b56" Jan 21 10:33:13 crc kubenswrapper[4684]: E0121 10:33:13.516268 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:33:24 crc kubenswrapper[4684]: I0121 10:33:24.514974 4684 scope.go:117] "RemoveContainer" containerID="df0b97e2271859ee95d0e6fe0e56bb03c909446093ed16d630b8869849584b56" Jan 21 10:33:24 crc kubenswrapper[4684]: E0121 10:33:24.515745 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:33:39 crc kubenswrapper[4684]: I0121 10:33:39.514800 4684 scope.go:117] "RemoveContainer" containerID="df0b97e2271859ee95d0e6fe0e56bb03c909446093ed16d630b8869849584b56" Jan 21 10:33:39 crc kubenswrapper[4684]: E0121 10:33:39.515902 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:33:52 crc kubenswrapper[4684]: I0121 10:33:52.520929 4684 scope.go:117] "RemoveContainer" containerID="df0b97e2271859ee95d0e6fe0e56bb03c909446093ed16d630b8869849584b56" Jan 21 10:33:52 crc kubenswrapper[4684]: E0121 10:33:52.522024 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:34:06 crc kubenswrapper[4684]: I0121 10:34:06.515212 4684 scope.go:117] "RemoveContainer" containerID="df0b97e2271859ee95d0e6fe0e56bb03c909446093ed16d630b8869849584b56" Jan 21 10:34:06 crc kubenswrapper[4684]: E0121 10:34:06.516123 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:34:18 crc kubenswrapper[4684]: I0121 10:34:18.515456 4684 scope.go:117] "RemoveContainer" containerID="df0b97e2271859ee95d0e6fe0e56bb03c909446093ed16d630b8869849584b56" Jan 21 10:34:18 crc kubenswrapper[4684]: E0121 10:34:18.520937 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:34:32 crc kubenswrapper[4684]: I0121 10:34:32.519316 4684 scope.go:117] "RemoveContainer" containerID="df0b97e2271859ee95d0e6fe0e56bb03c909446093ed16d630b8869849584b56" Jan 21 10:34:32 crc kubenswrapper[4684]: E0121 10:34:32.520390 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:34:44 crc kubenswrapper[4684]: I0121 10:34:44.514694 4684 scope.go:117] "RemoveContainer" containerID="df0b97e2271859ee95d0e6fe0e56bb03c909446093ed16d630b8869849584b56" Jan 21 10:34:44 crc kubenswrapper[4684]: E0121 10:34:44.515476 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:34:57 crc kubenswrapper[4684]: I0121 10:34:57.514839 4684 scope.go:117] "RemoveContainer" containerID="df0b97e2271859ee95d0e6fe0e56bb03c909446093ed16d630b8869849584b56" Jan 21 10:34:57 crc kubenswrapper[4684]: E0121 10:34:57.516841 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" 
podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:35:11 crc kubenswrapper[4684]: I0121 10:35:11.514504 4684 scope.go:117] "RemoveContainer" containerID="df0b97e2271859ee95d0e6fe0e56bb03c909446093ed16d630b8869849584b56" Jan 21 10:35:11 crc kubenswrapper[4684]: E0121 10:35:11.515259 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:35:26 crc kubenswrapper[4684]: I0121 10:35:26.519884 4684 scope.go:117] "RemoveContainer" containerID="df0b97e2271859ee95d0e6fe0e56bb03c909446093ed16d630b8869849584b56" Jan 21 10:35:26 crc kubenswrapper[4684]: E0121 10:35:26.520694 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:35:40 crc kubenswrapper[4684]: I0121 10:35:40.514677 4684 scope.go:117] "RemoveContainer" containerID="df0b97e2271859ee95d0e6fe0e56bb03c909446093ed16d630b8869849584b56" Jan 21 10:35:40 crc kubenswrapper[4684]: E0121 10:35:40.515483 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:35:53 crc kubenswrapper[4684]: I0121 10:35:53.514490 4684 scope.go:117] "RemoveContainer" containerID="df0b97e2271859ee95d0e6fe0e56bb03c909446093ed16d630b8869849584b56" Jan 21 10:35:53 crc kubenswrapper[4684]: E0121 10:35:53.515521 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:36:04 crc kubenswrapper[4684]: I0121 10:36:04.514755 4684 scope.go:117] "RemoveContainer" containerID="df0b97e2271859ee95d0e6fe0e56bb03c909446093ed16d630b8869849584b56" Jan 21 10:36:04 crc kubenswrapper[4684]: E0121 10:36:04.515550 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:36:19 crc kubenswrapper[4684]: I0121 10:36:19.514621 4684 scope.go:117] "RemoveContainer" 
containerID="df0b97e2271859ee95d0e6fe0e56bb03c909446093ed16d630b8869849584b56" Jan 21 10:36:19 crc kubenswrapper[4684]: E0121 10:36:19.515437 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:36:30 crc kubenswrapper[4684]: I0121 10:36:30.514744 4684 scope.go:117] "RemoveContainer" containerID="df0b97e2271859ee95d0e6fe0e56bb03c909446093ed16d630b8869849584b56" Jan 21 10:36:30 crc kubenswrapper[4684]: E0121 10:36:30.515530 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:36:34 crc kubenswrapper[4684]: I0121 10:36:34.894536 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-kh4fv"] Jan 21 10:36:34 crc kubenswrapper[4684]: E0121 10:36:34.895213 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8302dd4d-f927-4216-803a-29d735045608" containerName="extract-utilities" Jan 21 10:36:34 crc kubenswrapper[4684]: I0121 10:36:34.895233 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="8302dd4d-f927-4216-803a-29d735045608" containerName="extract-utilities" Jan 21 10:36:34 crc kubenswrapper[4684]: E0121 10:36:34.895249 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8302dd4d-f927-4216-803a-29d735045608" containerName="registry-server" Jan 21 10:36:34 crc kubenswrapper[4684]: I0121 10:36:34.895261 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="8302dd4d-f927-4216-803a-29d735045608" containerName="registry-server" Jan 21 10:36:34 crc kubenswrapper[4684]: E0121 10:36:34.895294 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8302dd4d-f927-4216-803a-29d735045608" containerName="extract-content" Jan 21 10:36:34 crc kubenswrapper[4684]: I0121 10:36:34.895306 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="8302dd4d-f927-4216-803a-29d735045608" containerName="extract-content" Jan 21 10:36:34 crc kubenswrapper[4684]: I0121 10:36:34.895527 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="8302dd4d-f927-4216-803a-29d735045608" containerName="registry-server" Jan 21 10:36:34 crc kubenswrapper[4684]: I0121 10:36:34.896199 4684 util.go:30] "No sandbox for pod can be found. 
Jan 21 10:36:34 crc kubenswrapper[4684]: I0121 10:36:34.912445 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-kh4fv"]
Jan 21 10:36:34 crc kubenswrapper[4684]: I0121 10:36:34.946706 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpm9t\" (UniqueName: \"kubernetes.io/projected/b3939a2a-7e77-4cba-87c1-7aa82ab35176-kube-api-access-dpm9t\") pod \"infrawatch-operators-kh4fv\" (UID: \"b3939a2a-7e77-4cba-87c1-7aa82ab35176\") " pod="service-telemetry/infrawatch-operators-kh4fv"
Jan 21 10:36:35 crc kubenswrapper[4684]: I0121 10:36:35.047766 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpm9t\" (UniqueName: \"kubernetes.io/projected/b3939a2a-7e77-4cba-87c1-7aa82ab35176-kube-api-access-dpm9t\") pod \"infrawatch-operators-kh4fv\" (UID: \"b3939a2a-7e77-4cba-87c1-7aa82ab35176\") " pod="service-telemetry/infrawatch-operators-kh4fv"
Jan 21 10:36:35 crc kubenswrapper[4684]: I0121 10:36:35.079439 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpm9t\" (UniqueName: \"kubernetes.io/projected/b3939a2a-7e77-4cba-87c1-7aa82ab35176-kube-api-access-dpm9t\") pod \"infrawatch-operators-kh4fv\" (UID: \"b3939a2a-7e77-4cba-87c1-7aa82ab35176\") " pod="service-telemetry/infrawatch-operators-kh4fv"
Jan 21 10:36:35 crc kubenswrapper[4684]: I0121 10:36:35.215440 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-kh4fv"
Jan 21 10:36:35 crc kubenswrapper[4684]: I0121 10:36:35.457434 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-kh4fv"]
Jan 21 10:36:35 crc kubenswrapper[4684]: I0121 10:36:35.469606 4684 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 10:36:35 crc kubenswrapper[4684]: I0121 10:36:35.797516 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-kh4fv" event={"ID":"b3939a2a-7e77-4cba-87c1-7aa82ab35176","Type":"ContainerStarted","Data":"841b9c38b76782d5770109220933ac8b19f56234f548c4c1129930a1d5909af0"}
Jan 21 10:36:36 crc kubenswrapper[4684]: I0121 10:36:36.807904 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-kh4fv" event={"ID":"b3939a2a-7e77-4cba-87c1-7aa82ab35176","Type":"ContainerStarted","Data":"133789e1da2da3833c724f1884fb0a77255dc067a200cc7a30107d80c5a5f6f8"}
Jan 21 10:36:44 crc kubenswrapper[4684]: I0121 10:36:44.514242 4684 scope.go:117] "RemoveContainer" containerID="df0b97e2271859ee95d0e6fe0e56bb03c909446093ed16d630b8869849584b56"
Jan 21 10:36:44 crc kubenswrapper[4684]: I0121 10:36:44.871862 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" event={"ID":"55d2c484-cf10-46b5-913f-2a033a2ff5c1","Type":"ContainerStarted","Data":"2e44b213bd94f3fcb7f66806990f0c2b0fa0b300b8664a99a36aebc9225a8bc1"}
Jan 21 10:36:44 crc kubenswrapper[4684]: I0121 10:36:44.891921 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-kh4fv" podStartSLOduration=10.437111976 podStartE2EDuration="10.891890895s" podCreationTimestamp="2026-01-21 10:36:34 +0000 UTC" firstStartedPulling="2026-01-21 10:36:35.468823376 +0000 UTC m=+1833.226906343" lastFinishedPulling="2026-01-21 10:36:35.923602275 +0000 UTC m=+1833.681685262" observedRunningTime="2026-01-21 10:36:36.828615473 +0000 UTC m=+1834.586698450" watchObservedRunningTime="2026-01-21 10:36:44.891890895 +0000 UTC m=+1842.649973902"
Jan 21 10:36:45 crc kubenswrapper[4684]: I0121 10:36:45.216304 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-kh4fv"
Jan 21 10:36:45 crc kubenswrapper[4684]: I0121 10:36:45.217903 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-kh4fv"
Jan 21 10:36:45 crc kubenswrapper[4684]: I0121 10:36:45.261668 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-kh4fv"
Jan 21 10:36:45 crc kubenswrapper[4684]: I0121 10:36:45.904192 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-kh4fv"
Jan 21 10:36:45 crc kubenswrapper[4684]: I0121 10:36:45.946650 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-kh4fv"]
Jan 21 10:36:47 crc kubenswrapper[4684]: I0121 10:36:47.894833 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-kh4fv" podUID="b3939a2a-7e77-4cba-87c1-7aa82ab35176" containerName="registry-server" containerID="cri-o://133789e1da2da3833c724f1884fb0a77255dc067a200cc7a30107d80c5a5f6f8" gracePeriod=2
Jan 21 10:36:48 crc kubenswrapper[4684]: I0121 10:36:48.333248 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-kh4fv"
Jan 21 10:36:48 crc kubenswrapper[4684]: I0121 10:36:48.472146 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpm9t\" (UniqueName: \"kubernetes.io/projected/b3939a2a-7e77-4cba-87c1-7aa82ab35176-kube-api-access-dpm9t\") pod \"b3939a2a-7e77-4cba-87c1-7aa82ab35176\" (UID: \"b3939a2a-7e77-4cba-87c1-7aa82ab35176\") "
Jan 21 10:36:48 crc kubenswrapper[4684]: I0121 10:36:48.484015 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3939a2a-7e77-4cba-87c1-7aa82ab35176-kube-api-access-dpm9t" (OuterVolumeSpecName: "kube-api-access-dpm9t") pod "b3939a2a-7e77-4cba-87c1-7aa82ab35176" (UID: "b3939a2a-7e77-4cba-87c1-7aa82ab35176"). InnerVolumeSpecName "kube-api-access-dpm9t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 10:36:48 crc kubenswrapper[4684]: I0121 10:36:48.573638 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpm9t\" (UniqueName: \"kubernetes.io/projected/b3939a2a-7e77-4cba-87c1-7aa82ab35176-kube-api-access-dpm9t\") on node \"crc\" DevicePath \"\""
Jan 21 10:36:48 crc kubenswrapper[4684]: I0121 10:36:48.902714 4684 generic.go:334] "Generic (PLEG): container finished" podID="b3939a2a-7e77-4cba-87c1-7aa82ab35176" containerID="133789e1da2da3833c724f1884fb0a77255dc067a200cc7a30107d80c5a5f6f8" exitCode=0
Jan 21 10:36:48 crc kubenswrapper[4684]: I0121 10:36:48.902760 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-kh4fv" event={"ID":"b3939a2a-7e77-4cba-87c1-7aa82ab35176","Type":"ContainerDied","Data":"133789e1da2da3833c724f1884fb0a77255dc067a200cc7a30107d80c5a5f6f8"}
Jan 21 10:36:48 crc kubenswrapper[4684]: I0121 10:36:48.902786 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-kh4fv" event={"ID":"b3939a2a-7e77-4cba-87c1-7aa82ab35176","Type":"ContainerDied","Data":"841b9c38b76782d5770109220933ac8b19f56234f548c4c1129930a1d5909af0"}
Jan 21 10:36:48 crc kubenswrapper[4684]: I0121 10:36:48.902803 4684 scope.go:117] "RemoveContainer" containerID="133789e1da2da3833c724f1884fb0a77255dc067a200cc7a30107d80c5a5f6f8"
Jan 21 10:36:48 crc kubenswrapper[4684]: I0121 10:36:48.902906 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-kh4fv"
Jan 21 10:36:48 crc kubenswrapper[4684]: I0121 10:36:48.921825 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-kh4fv"]
Jan 21 10:36:48 crc kubenswrapper[4684]: I0121 10:36:48.923906 4684 scope.go:117] "RemoveContainer" containerID="133789e1da2da3833c724f1884fb0a77255dc067a200cc7a30107d80c5a5f6f8"
Jan 21 10:36:48 crc kubenswrapper[4684]: E0121 10:36:48.924484 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"133789e1da2da3833c724f1884fb0a77255dc067a200cc7a30107d80c5a5f6f8\": container with ID starting with 133789e1da2da3833c724f1884fb0a77255dc067a200cc7a30107d80c5a5f6f8 not found: ID does not exist" containerID="133789e1da2da3833c724f1884fb0a77255dc067a200cc7a30107d80c5a5f6f8"
Jan 21 10:36:48 crc kubenswrapper[4684]: I0121 10:36:48.924528 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"133789e1da2da3833c724f1884fb0a77255dc067a200cc7a30107d80c5a5f6f8"} err="failed to get container status \"133789e1da2da3833c724f1884fb0a77255dc067a200cc7a30107d80c5a5f6f8\": rpc error: code = NotFound desc = could not find container \"133789e1da2da3833c724f1884fb0a77255dc067a200cc7a30107d80c5a5f6f8\": container with ID starting with 133789e1da2da3833c724f1884fb0a77255dc067a200cc7a30107d80c5a5f6f8 not found: ID does not exist"
Jan 21 10:36:48 crc kubenswrapper[4684]: I0121 10:36:48.927407 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-kh4fv"]
Jan 21 10:36:50 crc kubenswrapper[4684]: I0121 10:36:50.527299 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3939a2a-7e77-4cba-87c1-7aa82ab35176" path="/var/lib/kubelet/pods/b3939a2a-7e77-4cba-87c1-7aa82ab35176/volumes"
Jan 21 10:38:50 crc kubenswrapper[4684]: I0121 10:38:50.318546 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rvmtb"]
pods=["openshift-marketplace/redhat-operators-rvmtb"] Jan 21 10:38:50 crc kubenswrapper[4684]: E0121 10:38:50.319861 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3939a2a-7e77-4cba-87c1-7aa82ab35176" containerName="registry-server" Jan 21 10:38:50 crc kubenswrapper[4684]: I0121 10:38:50.319892 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3939a2a-7e77-4cba-87c1-7aa82ab35176" containerName="registry-server" Jan 21 10:38:50 crc kubenswrapper[4684]: I0121 10:38:50.320146 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3939a2a-7e77-4cba-87c1-7aa82ab35176" containerName="registry-server" Jan 21 10:38:50 crc kubenswrapper[4684]: I0121 10:38:50.321957 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rvmtb" Jan 21 10:38:50 crc kubenswrapper[4684]: I0121 10:38:50.328647 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rvmtb"] Jan 21 10:38:50 crc kubenswrapper[4684]: I0121 10:38:50.429537 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c09404b1-e73a-4db3-aefa-1d71729813f5-utilities\") pod \"redhat-operators-rvmtb\" (UID: \"c09404b1-e73a-4db3-aefa-1d71729813f5\") " pod="openshift-marketplace/redhat-operators-rvmtb" Jan 21 10:38:50 crc kubenswrapper[4684]: I0121 10:38:50.429588 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c09404b1-e73a-4db3-aefa-1d71729813f5-catalog-content\") pod \"redhat-operators-rvmtb\" (UID: \"c09404b1-e73a-4db3-aefa-1d71729813f5\") " pod="openshift-marketplace/redhat-operators-rvmtb" Jan 21 10:38:50 crc kubenswrapper[4684]: I0121 10:38:50.429624 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc497\" (UniqueName: \"kubernetes.io/projected/c09404b1-e73a-4db3-aefa-1d71729813f5-kube-api-access-hc497\") pod \"redhat-operators-rvmtb\" (UID: \"c09404b1-e73a-4db3-aefa-1d71729813f5\") " pod="openshift-marketplace/redhat-operators-rvmtb" Jan 21 10:38:50 crc kubenswrapper[4684]: I0121 10:38:50.531091 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c09404b1-e73a-4db3-aefa-1d71729813f5-utilities\") pod \"redhat-operators-rvmtb\" (UID: \"c09404b1-e73a-4db3-aefa-1d71729813f5\") " pod="openshift-marketplace/redhat-operators-rvmtb" Jan 21 10:38:50 crc kubenswrapper[4684]: I0121 10:38:50.531142 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c09404b1-e73a-4db3-aefa-1d71729813f5-catalog-content\") pod \"redhat-operators-rvmtb\" (UID: \"c09404b1-e73a-4db3-aefa-1d71729813f5\") " pod="openshift-marketplace/redhat-operators-rvmtb" Jan 21 10:38:50 crc kubenswrapper[4684]: I0121 10:38:50.531181 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc497\" (UniqueName: \"kubernetes.io/projected/c09404b1-e73a-4db3-aefa-1d71729813f5-kube-api-access-hc497\") pod \"redhat-operators-rvmtb\" (UID: \"c09404b1-e73a-4db3-aefa-1d71729813f5\") " pod="openshift-marketplace/redhat-operators-rvmtb" Jan 21 10:38:50 crc kubenswrapper[4684]: I0121 10:38:50.531708 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c09404b1-e73a-4db3-aefa-1d71729813f5-catalog-content\") pod \"redhat-operators-rvmtb\" (UID: \"c09404b1-e73a-4db3-aefa-1d71729813f5\") " pod="openshift-marketplace/redhat-operators-rvmtb" Jan 21 10:38:50 crc kubenswrapper[4684]: I0121 10:38:50.531719 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c09404b1-e73a-4db3-aefa-1d71729813f5-utilities\") pod \"redhat-operators-rvmtb\" (UID: \"c09404b1-e73a-4db3-aefa-1d71729813f5\") " pod="openshift-marketplace/redhat-operators-rvmtb" Jan 21 10:38:50 crc kubenswrapper[4684]: I0121 10:38:50.557122 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc497\" (UniqueName: \"kubernetes.io/projected/c09404b1-e73a-4db3-aefa-1d71729813f5-kube-api-access-hc497\") pod \"redhat-operators-rvmtb\" (UID: \"c09404b1-e73a-4db3-aefa-1d71729813f5\") " pod="openshift-marketplace/redhat-operators-rvmtb" Jan 21 10:38:50 crc kubenswrapper[4684]: I0121 10:38:50.654978 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rvmtb" Jan 21 10:38:50 crc kubenswrapper[4684]: I0121 10:38:50.864435 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rvmtb"] Jan 21 10:38:51 crc kubenswrapper[4684]: I0121 10:38:51.816377 4684 generic.go:334] "Generic (PLEG): container finished" podID="c09404b1-e73a-4db3-aefa-1d71729813f5" containerID="3c069c6b586c8e118de0ff8f39362c9bc2bc9bf07a49af9aa20fbbb3c9cb49c3" exitCode=0 Jan 21 10:38:51 crc kubenswrapper[4684]: I0121 10:38:51.816432 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvmtb" event={"ID":"c09404b1-e73a-4db3-aefa-1d71729813f5","Type":"ContainerDied","Data":"3c069c6b586c8e118de0ff8f39362c9bc2bc9bf07a49af9aa20fbbb3c9cb49c3"} Jan 21 10:38:51 crc kubenswrapper[4684]: I0121 10:38:51.816659 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvmtb" event={"ID":"c09404b1-e73a-4db3-aefa-1d71729813f5","Type":"ContainerStarted","Data":"cf89abcd9404f13a9f72cd332295b3b377051d6c5a9e6aced0ba6f8c3f751b05"} Jan 21 10:38:55 crc kubenswrapper[4684]: I0121 10:38:55.856123 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvmtb" event={"ID":"c09404b1-e73a-4db3-aefa-1d71729813f5","Type":"ContainerStarted","Data":"6e0842ce2d8e61cd36c0b8110c380d7f97c7f7c4d9ae6b399372876f5273d986"} Jan 21 10:38:56 crc kubenswrapper[4684]: I0121 10:38:56.865749 4684 generic.go:334] "Generic (PLEG): container finished" podID="c09404b1-e73a-4db3-aefa-1d71729813f5" containerID="6e0842ce2d8e61cd36c0b8110c380d7f97c7f7c4d9ae6b399372876f5273d986" exitCode=0 Jan 21 10:38:56 crc kubenswrapper[4684]: I0121 10:38:56.865840 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvmtb" event={"ID":"c09404b1-e73a-4db3-aefa-1d71729813f5","Type":"ContainerDied","Data":"6e0842ce2d8e61cd36c0b8110c380d7f97c7f7c4d9ae6b399372876f5273d986"} Jan 21 10:39:00 crc kubenswrapper[4684]: I0121 10:39:00.895061 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvmtb" event={"ID":"c09404b1-e73a-4db3-aefa-1d71729813f5","Type":"ContainerStarted","Data":"b37e83d0e47adf57b3b2bb5fa1b6d3fd3d552dc5b3d6c4d74b91df9d96975dbf"} Jan 21 10:39:00 crc kubenswrapper[4684]: I0121 10:39:00.921566 4684 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rvmtb" podStartSLOduration=2.121250115 podStartE2EDuration="10.921547288s" podCreationTimestamp="2026-01-21 10:38:50 +0000 UTC" firstStartedPulling="2026-01-21 10:38:51.818830644 +0000 UTC m=+1969.576913611" lastFinishedPulling="2026-01-21 10:39:00.619127817 +0000 UTC m=+1978.377210784" observedRunningTime="2026-01-21 10:39:00.917577367 +0000 UTC m=+1978.675660344" watchObservedRunningTime="2026-01-21 10:39:00.921547288 +0000 UTC m=+1978.679630255" Jan 21 10:39:07 crc kubenswrapper[4684]: I0121 10:39:07.302746 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:39:07 crc kubenswrapper[4684]: I0121 10:39:07.303452 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:39:10 crc kubenswrapper[4684]: I0121 10:39:10.655401 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rvmtb" Jan 21 10:39:10 crc kubenswrapper[4684]: I0121 10:39:10.655997 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rvmtb" Jan 21 10:39:10 crc kubenswrapper[4684]: I0121 10:39:10.700761 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rvmtb" Jan 21 10:39:11 crc kubenswrapper[4684]: I0121 10:39:11.026436 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rvmtb" Jan 21 10:39:11 crc kubenswrapper[4684]: I0121 10:39:11.081406 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rvmtb"] Jan 21 10:39:12 crc kubenswrapper[4684]: I0121 10:39:12.991511 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rvmtb" podUID="c09404b1-e73a-4db3-aefa-1d71729813f5" containerName="registry-server" containerID="cri-o://b37e83d0e47adf57b3b2bb5fa1b6d3fd3d552dc5b3d6c4d74b91df9d96975dbf" gracePeriod=2 Jan 21 10:39:13 crc kubenswrapper[4684]: I0121 10:39:13.899286 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rvmtb" Jan 21 10:39:14 crc kubenswrapper[4684]: I0121 10:39:14.005248 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c09404b1-e73a-4db3-aefa-1d71729813f5-catalog-content\") pod \"c09404b1-e73a-4db3-aefa-1d71729813f5\" (UID: \"c09404b1-e73a-4db3-aefa-1d71729813f5\") " Jan 21 10:39:14 crc kubenswrapper[4684]: I0121 10:39:14.005351 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hc497\" (UniqueName: \"kubernetes.io/projected/c09404b1-e73a-4db3-aefa-1d71729813f5-kube-api-access-hc497\") pod \"c09404b1-e73a-4db3-aefa-1d71729813f5\" (UID: \"c09404b1-e73a-4db3-aefa-1d71729813f5\") " Jan 21 10:39:14 crc kubenswrapper[4684]: I0121 10:39:14.005496 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c09404b1-e73a-4db3-aefa-1d71729813f5-utilities\") pod \"c09404b1-e73a-4db3-aefa-1d71729813f5\" (UID: \"c09404b1-e73a-4db3-aefa-1d71729813f5\") " Jan 21 10:39:14 crc kubenswrapper[4684]: I0121 10:39:14.007782 4684 generic.go:334] "Generic (PLEG): container finished" podID="c09404b1-e73a-4db3-aefa-1d71729813f5" containerID="b37e83d0e47adf57b3b2bb5fa1b6d3fd3d552dc5b3d6c4d74b91df9d96975dbf" exitCode=0 Jan 21 10:39:14 crc kubenswrapper[4684]: I0121 10:39:14.007838 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvmtb" event={"ID":"c09404b1-e73a-4db3-aefa-1d71729813f5","Type":"ContainerDied","Data":"b37e83d0e47adf57b3b2bb5fa1b6d3fd3d552dc5b3d6c4d74b91df9d96975dbf"} Jan 21 10:39:14 crc kubenswrapper[4684]: I0121 10:39:14.007875 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvmtb" event={"ID":"c09404b1-e73a-4db3-aefa-1d71729813f5","Type":"ContainerDied","Data":"cf89abcd9404f13a9f72cd332295b3b377051d6c5a9e6aced0ba6f8c3f751b05"} Jan 21 10:39:14 crc kubenswrapper[4684]: I0121 10:39:14.007898 4684 scope.go:117] "RemoveContainer" containerID="b37e83d0e47adf57b3b2bb5fa1b6d3fd3d552dc5b3d6c4d74b91df9d96975dbf" Jan 21 10:39:14 crc kubenswrapper[4684]: I0121 10:39:14.007955 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rvmtb" Jan 21 10:39:14 crc kubenswrapper[4684]: I0121 10:39:14.008077 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c09404b1-e73a-4db3-aefa-1d71729813f5-utilities" (OuterVolumeSpecName: "utilities") pod "c09404b1-e73a-4db3-aefa-1d71729813f5" (UID: "c09404b1-e73a-4db3-aefa-1d71729813f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:39:14 crc kubenswrapper[4684]: I0121 10:39:14.021820 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c09404b1-e73a-4db3-aefa-1d71729813f5-kube-api-access-hc497" (OuterVolumeSpecName: "kube-api-access-hc497") pod "c09404b1-e73a-4db3-aefa-1d71729813f5" (UID: "c09404b1-e73a-4db3-aefa-1d71729813f5"). InnerVolumeSpecName "kube-api-access-hc497". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:39:14 crc kubenswrapper[4684]: I0121 10:39:14.062992 4684 scope.go:117] "RemoveContainer" containerID="6e0842ce2d8e61cd36c0b8110c380d7f97c7f7c4d9ae6b399372876f5273d986" Jan 21 10:39:14 crc kubenswrapper[4684]: I0121 10:39:14.092005 4684 scope.go:117] "RemoveContainer" containerID="3c069c6b586c8e118de0ff8f39362c9bc2bc9bf07a49af9aa20fbbb3c9cb49c3" Jan 21 10:39:14 crc kubenswrapper[4684]: I0121 10:39:14.113705 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hc497\" (UniqueName: \"kubernetes.io/projected/c09404b1-e73a-4db3-aefa-1d71729813f5-kube-api-access-hc497\") on node \"crc\" DevicePath \"\"" Jan 21 10:39:14 crc kubenswrapper[4684]: I0121 10:39:14.113741 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c09404b1-e73a-4db3-aefa-1d71729813f5-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:39:14 crc kubenswrapper[4684]: I0121 10:39:14.118341 4684 scope.go:117] "RemoveContainer" containerID="b37e83d0e47adf57b3b2bb5fa1b6d3fd3d552dc5b3d6c4d74b91df9d96975dbf" Jan 21 10:39:14 crc kubenswrapper[4684]: E0121 10:39:14.119278 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b37e83d0e47adf57b3b2bb5fa1b6d3fd3d552dc5b3d6c4d74b91df9d96975dbf\": container with ID starting with b37e83d0e47adf57b3b2bb5fa1b6d3fd3d552dc5b3d6c4d74b91df9d96975dbf not found: ID does not exist" containerID="b37e83d0e47adf57b3b2bb5fa1b6d3fd3d552dc5b3d6c4d74b91df9d96975dbf" Jan 21 10:39:14 crc kubenswrapper[4684]: I0121 10:39:14.119323 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b37e83d0e47adf57b3b2bb5fa1b6d3fd3d552dc5b3d6c4d74b91df9d96975dbf"} err="failed to get container status \"b37e83d0e47adf57b3b2bb5fa1b6d3fd3d552dc5b3d6c4d74b91df9d96975dbf\": rpc error: code = NotFound desc = could not find container \"b37e83d0e47adf57b3b2bb5fa1b6d3fd3d552dc5b3d6c4d74b91df9d96975dbf\": container with ID starting with b37e83d0e47adf57b3b2bb5fa1b6d3fd3d552dc5b3d6c4d74b91df9d96975dbf not found: ID does not exist" Jan 21 10:39:14 crc kubenswrapper[4684]: I0121 10:39:14.119370 4684 scope.go:117] "RemoveContainer" containerID="6e0842ce2d8e61cd36c0b8110c380d7f97c7f7c4d9ae6b399372876f5273d986" Jan 21 10:39:14 crc kubenswrapper[4684]: E0121 10:39:14.119739 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e0842ce2d8e61cd36c0b8110c380d7f97c7f7c4d9ae6b399372876f5273d986\": container with ID starting with 6e0842ce2d8e61cd36c0b8110c380d7f97c7f7c4d9ae6b399372876f5273d986 not found: ID does not exist" containerID="6e0842ce2d8e61cd36c0b8110c380d7f97c7f7c4d9ae6b399372876f5273d986" Jan 21 10:39:14 crc kubenswrapper[4684]: I0121 10:39:14.119803 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e0842ce2d8e61cd36c0b8110c380d7f97c7f7c4d9ae6b399372876f5273d986"} err="failed to get container status \"6e0842ce2d8e61cd36c0b8110c380d7f97c7f7c4d9ae6b399372876f5273d986\": rpc error: code = NotFound desc = could not find container \"6e0842ce2d8e61cd36c0b8110c380d7f97c7f7c4d9ae6b399372876f5273d986\": container with ID starting with 6e0842ce2d8e61cd36c0b8110c380d7f97c7f7c4d9ae6b399372876f5273d986 not found: ID does not exist" Jan 21 10:39:14 crc kubenswrapper[4684]: I0121 10:39:14.119840 4684 scope.go:117] "RemoveContainer" 
containerID="3c069c6b586c8e118de0ff8f39362c9bc2bc9bf07a49af9aa20fbbb3c9cb49c3" Jan 21 10:39:14 crc kubenswrapper[4684]: E0121 10:39:14.120177 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c069c6b586c8e118de0ff8f39362c9bc2bc9bf07a49af9aa20fbbb3c9cb49c3\": container with ID starting with 3c069c6b586c8e118de0ff8f39362c9bc2bc9bf07a49af9aa20fbbb3c9cb49c3 not found: ID does not exist" containerID="3c069c6b586c8e118de0ff8f39362c9bc2bc9bf07a49af9aa20fbbb3c9cb49c3" Jan 21 10:39:14 crc kubenswrapper[4684]: I0121 10:39:14.120201 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c069c6b586c8e118de0ff8f39362c9bc2bc9bf07a49af9aa20fbbb3c9cb49c3"} err="failed to get container status \"3c069c6b586c8e118de0ff8f39362c9bc2bc9bf07a49af9aa20fbbb3c9cb49c3\": rpc error: code = NotFound desc = could not find container \"3c069c6b586c8e118de0ff8f39362c9bc2bc9bf07a49af9aa20fbbb3c9cb49c3\": container with ID starting with 3c069c6b586c8e118de0ff8f39362c9bc2bc9bf07a49af9aa20fbbb3c9cb49c3 not found: ID does not exist" Jan 21 10:39:14 crc kubenswrapper[4684]: I0121 10:39:14.149148 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c09404b1-e73a-4db3-aefa-1d71729813f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c09404b1-e73a-4db3-aefa-1d71729813f5" (UID: "c09404b1-e73a-4db3-aefa-1d71729813f5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:39:14 crc kubenswrapper[4684]: I0121 10:39:14.215425 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c09404b1-e73a-4db3-aefa-1d71729813f5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:39:14 crc kubenswrapper[4684]: I0121 10:39:14.352460 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rvmtb"] Jan 21 10:39:14 crc kubenswrapper[4684]: I0121 10:39:14.359117 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rvmtb"] Jan 21 10:39:14 crc kubenswrapper[4684]: I0121 10:39:14.525336 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c09404b1-e73a-4db3-aefa-1d71729813f5" path="/var/lib/kubelet/pods/c09404b1-e73a-4db3-aefa-1d71729813f5/volumes" Jan 21 10:39:37 crc kubenswrapper[4684]: I0121 10:39:37.302610 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:39:37 crc kubenswrapper[4684]: I0121 10:39:37.303165 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:39:59 crc kubenswrapper[4684]: I0121 10:39:59.152083 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-shrfd"] Jan 21 10:39:59 crc kubenswrapper[4684]: E0121 10:39:59.152947 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c09404b1-e73a-4db3-aefa-1d71729813f5" 
containerName="registry-server" Jan 21 10:39:59 crc kubenswrapper[4684]: I0121 10:39:59.152965 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="c09404b1-e73a-4db3-aefa-1d71729813f5" containerName="registry-server" Jan 21 10:39:59 crc kubenswrapper[4684]: E0121 10:39:59.152981 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c09404b1-e73a-4db3-aefa-1d71729813f5" containerName="extract-utilities" Jan 21 10:39:59 crc kubenswrapper[4684]: I0121 10:39:59.152989 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="c09404b1-e73a-4db3-aefa-1d71729813f5" containerName="extract-utilities" Jan 21 10:39:59 crc kubenswrapper[4684]: E0121 10:39:59.153015 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c09404b1-e73a-4db3-aefa-1d71729813f5" containerName="extract-content" Jan 21 10:39:59 crc kubenswrapper[4684]: I0121 10:39:59.153026 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="c09404b1-e73a-4db3-aefa-1d71729813f5" containerName="extract-content" Jan 21 10:39:59 crc kubenswrapper[4684]: I0121 10:39:59.153177 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="c09404b1-e73a-4db3-aefa-1d71729813f5" containerName="registry-server" Jan 21 10:39:59 crc kubenswrapper[4684]: I0121 10:39:59.154284 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-shrfd" Jan 21 10:39:59 crc kubenswrapper[4684]: I0121 10:39:59.162106 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-shrfd"] Jan 21 10:39:59 crc kubenswrapper[4684]: I0121 10:39:59.213412 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6826f57-513b-4912-b8f3-630344824bc0-catalog-content\") pod \"certified-operators-shrfd\" (UID: \"e6826f57-513b-4912-b8f3-630344824bc0\") " pod="openshift-marketplace/certified-operators-shrfd" Jan 21 10:39:59 crc kubenswrapper[4684]: I0121 10:39:59.213506 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6826f57-513b-4912-b8f3-630344824bc0-utilities\") pod \"certified-operators-shrfd\" (UID: \"e6826f57-513b-4912-b8f3-630344824bc0\") " pod="openshift-marketplace/certified-operators-shrfd" Jan 21 10:39:59 crc kubenswrapper[4684]: I0121 10:39:59.213538 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gklm5\" (UniqueName: \"kubernetes.io/projected/e6826f57-513b-4912-b8f3-630344824bc0-kube-api-access-gklm5\") pod \"certified-operators-shrfd\" (UID: \"e6826f57-513b-4912-b8f3-630344824bc0\") " pod="openshift-marketplace/certified-operators-shrfd" Jan 21 10:39:59 crc kubenswrapper[4684]: I0121 10:39:59.315115 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6826f57-513b-4912-b8f3-630344824bc0-catalog-content\") pod \"certified-operators-shrfd\" (UID: \"e6826f57-513b-4912-b8f3-630344824bc0\") " pod="openshift-marketplace/certified-operators-shrfd" Jan 21 10:39:59 crc kubenswrapper[4684]: I0121 10:39:59.315188 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6826f57-513b-4912-b8f3-630344824bc0-utilities\") pod \"certified-operators-shrfd\" (UID: 
\"e6826f57-513b-4912-b8f3-630344824bc0\") " pod="openshift-marketplace/certified-operators-shrfd" Jan 21 10:39:59 crc kubenswrapper[4684]: I0121 10:39:59.315225 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gklm5\" (UniqueName: \"kubernetes.io/projected/e6826f57-513b-4912-b8f3-630344824bc0-kube-api-access-gklm5\") pod \"certified-operators-shrfd\" (UID: \"e6826f57-513b-4912-b8f3-630344824bc0\") " pod="openshift-marketplace/certified-operators-shrfd" Jan 21 10:39:59 crc kubenswrapper[4684]: I0121 10:39:59.316094 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6826f57-513b-4912-b8f3-630344824bc0-catalog-content\") pod \"certified-operators-shrfd\" (UID: \"e6826f57-513b-4912-b8f3-630344824bc0\") " pod="openshift-marketplace/certified-operators-shrfd" Jan 21 10:39:59 crc kubenswrapper[4684]: I0121 10:39:59.316354 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6826f57-513b-4912-b8f3-630344824bc0-utilities\") pod \"certified-operators-shrfd\" (UID: \"e6826f57-513b-4912-b8f3-630344824bc0\") " pod="openshift-marketplace/certified-operators-shrfd" Jan 21 10:39:59 crc kubenswrapper[4684]: I0121 10:39:59.335392 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gklm5\" (UniqueName: \"kubernetes.io/projected/e6826f57-513b-4912-b8f3-630344824bc0-kube-api-access-gklm5\") pod \"certified-operators-shrfd\" (UID: \"e6826f57-513b-4912-b8f3-630344824bc0\") " pod="openshift-marketplace/certified-operators-shrfd" Jan 21 10:39:59 crc kubenswrapper[4684]: I0121 10:39:59.473975 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-shrfd" Jan 21 10:39:59 crc kubenswrapper[4684]: I0121 10:39:59.774682 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-shrfd"] Jan 21 10:40:00 crc kubenswrapper[4684]: I0121 10:40:00.335607 4684 generic.go:334] "Generic (PLEG): container finished" podID="e6826f57-513b-4912-b8f3-630344824bc0" containerID="56f63b4717e3b3b28c601c1adc1cc6bb33a3a18bb97b1b639967f42192c07251" exitCode=0 Jan 21 10:40:00 crc kubenswrapper[4684]: I0121 10:40:00.335675 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-shrfd" event={"ID":"e6826f57-513b-4912-b8f3-630344824bc0","Type":"ContainerDied","Data":"56f63b4717e3b3b28c601c1adc1cc6bb33a3a18bb97b1b639967f42192c07251"} Jan 21 10:40:00 crc kubenswrapper[4684]: I0121 10:40:00.335720 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-shrfd" event={"ID":"e6826f57-513b-4912-b8f3-630344824bc0","Type":"ContainerStarted","Data":"d2ce2048b5c3ba96b23413cc700d3b4b7ca11b35ee286ac88ffb138309892629"} Jan 21 10:40:03 crc kubenswrapper[4684]: I0121 10:40:03.366387 4684 generic.go:334] "Generic (PLEG): container finished" podID="e6826f57-513b-4912-b8f3-630344824bc0" containerID="a60eb614074988fefda98ea51d9b8b1506349dba1ba10b45607e94ab7c3baea3" exitCode=0 Jan 21 10:40:03 crc kubenswrapper[4684]: I0121 10:40:03.366522 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-shrfd" event={"ID":"e6826f57-513b-4912-b8f3-630344824bc0","Type":"ContainerDied","Data":"a60eb614074988fefda98ea51d9b8b1506349dba1ba10b45607e94ab7c3baea3"} Jan 21 10:40:04 crc kubenswrapper[4684]: I0121 10:40:04.375704 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-shrfd" event={"ID":"e6826f57-513b-4912-b8f3-630344824bc0","Type":"ContainerStarted","Data":"fb77050687d249e47e84f2d31a44bd2d31e20093507e10f0ad3fb89f911e5f20"} Jan 21 10:40:07 crc kubenswrapper[4684]: I0121 10:40:07.302288 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:40:07 crc kubenswrapper[4684]: I0121 10:40:07.302657 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:40:07 crc kubenswrapper[4684]: I0121 10:40:07.302707 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" Jan 21 10:40:07 crc kubenswrapper[4684]: I0121 10:40:07.303350 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2e44b213bd94f3fcb7f66806990f0c2b0fa0b300b8664a99a36aebc9225a8bc1"} pod="openshift-machine-config-operator/machine-config-daemon-sff6s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 10:40:07 crc kubenswrapper[4684]: I0121 10:40:07.303433 4684 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" containerID="cri-o://2e44b213bd94f3fcb7f66806990f0c2b0fa0b300b8664a99a36aebc9225a8bc1" gracePeriod=600 Jan 21 10:40:08 crc kubenswrapper[4684]: I0121 10:40:08.413941 4684 generic.go:334] "Generic (PLEG): container finished" podID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerID="2e44b213bd94f3fcb7f66806990f0c2b0fa0b300b8664a99a36aebc9225a8bc1" exitCode=0 Jan 21 10:40:08 crc kubenswrapper[4684]: I0121 10:40:08.413972 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" event={"ID":"55d2c484-cf10-46b5-913f-2a033a2ff5c1","Type":"ContainerDied","Data":"2e44b213bd94f3fcb7f66806990f0c2b0fa0b300b8664a99a36aebc9225a8bc1"} Jan 21 10:40:08 crc kubenswrapper[4684]: I0121 10:40:08.414909 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" event={"ID":"55d2c484-cf10-46b5-913f-2a033a2ff5c1","Type":"ContainerStarted","Data":"062d85d4dd293a66c5ea87b72d5ef4e17f0f11089742f21c2445dde53232d20e"} Jan 21 10:40:08 crc kubenswrapper[4684]: I0121 10:40:08.414956 4684 scope.go:117] "RemoveContainer" containerID="df0b97e2271859ee95d0e6fe0e56bb03c909446093ed16d630b8869849584b56" Jan 21 10:40:08 crc kubenswrapper[4684]: I0121 10:40:08.433730 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-shrfd" podStartSLOduration=5.964139228 podStartE2EDuration="9.433709798s" podCreationTimestamp="2026-01-21 10:39:59 +0000 UTC" firstStartedPulling="2026-01-21 10:40:00.337242436 +0000 UTC m=+2038.095325403" lastFinishedPulling="2026-01-21 10:40:03.806813006 +0000 UTC m=+2041.564895973" observedRunningTime="2026-01-21 10:40:04.401771764 +0000 UTC m=+2042.159854781" watchObservedRunningTime="2026-01-21 10:40:08.433709798 +0000 UTC m=+2046.191792765" Jan 21 10:40:09 crc kubenswrapper[4684]: I0121 10:40:09.475038 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-shrfd" Jan 21 10:40:09 crc kubenswrapper[4684]: I0121 10:40:09.475330 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-shrfd" Jan 21 10:40:09 crc kubenswrapper[4684]: I0121 10:40:09.522485 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-shrfd" Jan 21 10:40:10 crc kubenswrapper[4684]: I0121 10:40:10.475170 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-shrfd" Jan 21 10:40:10 crc kubenswrapper[4684]: I0121 10:40:10.530404 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-shrfd"] Jan 21 10:40:12 crc kubenswrapper[4684]: I0121 10:40:12.444660 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-shrfd" podUID="e6826f57-513b-4912-b8f3-630344824bc0" containerName="registry-server" containerID="cri-o://fb77050687d249e47e84f2d31a44bd2d31e20093507e10f0ad3fb89f911e5f20" gracePeriod=2 Jan 21 10:40:13 crc kubenswrapper[4684]: I0121 10:40:13.452345 4684 generic.go:334] "Generic (PLEG): container finished" podID="e6826f57-513b-4912-b8f3-630344824bc0" 
containerID="fb77050687d249e47e84f2d31a44bd2d31e20093507e10f0ad3fb89f911e5f20" exitCode=0 Jan 21 10:40:13 crc kubenswrapper[4684]: I0121 10:40:13.452424 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-shrfd" event={"ID":"e6826f57-513b-4912-b8f3-630344824bc0","Type":"ContainerDied","Data":"fb77050687d249e47e84f2d31a44bd2d31e20093507e10f0ad3fb89f911e5f20"} Jan 21 10:40:13 crc kubenswrapper[4684]: I0121 10:40:13.711079 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-shrfd" Jan 21 10:40:13 crc kubenswrapper[4684]: I0121 10:40:13.739351 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gklm5\" (UniqueName: \"kubernetes.io/projected/e6826f57-513b-4912-b8f3-630344824bc0-kube-api-access-gklm5\") pod \"e6826f57-513b-4912-b8f3-630344824bc0\" (UID: \"e6826f57-513b-4912-b8f3-630344824bc0\") " Jan 21 10:40:13 crc kubenswrapper[4684]: I0121 10:40:13.739627 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6826f57-513b-4912-b8f3-630344824bc0-catalog-content\") pod \"e6826f57-513b-4912-b8f3-630344824bc0\" (UID: \"e6826f57-513b-4912-b8f3-630344824bc0\") " Jan 21 10:40:13 crc kubenswrapper[4684]: I0121 10:40:13.739686 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6826f57-513b-4912-b8f3-630344824bc0-utilities\") pod \"e6826f57-513b-4912-b8f3-630344824bc0\" (UID: \"e6826f57-513b-4912-b8f3-630344824bc0\") " Jan 21 10:40:13 crc kubenswrapper[4684]: I0121 10:40:13.740813 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6826f57-513b-4912-b8f3-630344824bc0-utilities" (OuterVolumeSpecName: "utilities") pod "e6826f57-513b-4912-b8f3-630344824bc0" (UID: "e6826f57-513b-4912-b8f3-630344824bc0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:40:13 crc kubenswrapper[4684]: I0121 10:40:13.744720 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6826f57-513b-4912-b8f3-630344824bc0-kube-api-access-gklm5" (OuterVolumeSpecName: "kube-api-access-gklm5") pod "e6826f57-513b-4912-b8f3-630344824bc0" (UID: "e6826f57-513b-4912-b8f3-630344824bc0"). InnerVolumeSpecName "kube-api-access-gklm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:40:13 crc kubenswrapper[4684]: I0121 10:40:13.796815 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6826f57-513b-4912-b8f3-630344824bc0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6826f57-513b-4912-b8f3-630344824bc0" (UID: "e6826f57-513b-4912-b8f3-630344824bc0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:40:13 crc kubenswrapper[4684]: I0121 10:40:13.845124 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6826f57-513b-4912-b8f3-630344824bc0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:13 crc kubenswrapper[4684]: I0121 10:40:13.848580 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gklm5\" (UniqueName: \"kubernetes.io/projected/e6826f57-513b-4912-b8f3-630344824bc0-kube-api-access-gklm5\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:13 crc kubenswrapper[4684]: I0121 10:40:13.848638 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6826f57-513b-4912-b8f3-630344824bc0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:14 crc kubenswrapper[4684]: I0121 10:40:14.461844 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-shrfd" Jan 21 10:40:14 crc kubenswrapper[4684]: I0121 10:40:14.461775 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-shrfd" event={"ID":"e6826f57-513b-4912-b8f3-630344824bc0","Type":"ContainerDied","Data":"d2ce2048b5c3ba96b23413cc700d3b4b7ca11b35ee286ac88ffb138309892629"} Jan 21 10:40:14 crc kubenswrapper[4684]: I0121 10:40:14.463214 4684 scope.go:117] "RemoveContainer" containerID="fb77050687d249e47e84f2d31a44bd2d31e20093507e10f0ad3fb89f911e5f20" Jan 21 10:40:14 crc kubenswrapper[4684]: I0121 10:40:14.484146 4684 scope.go:117] "RemoveContainer" containerID="a60eb614074988fefda98ea51d9b8b1506349dba1ba10b45607e94ab7c3baea3" Jan 21 10:40:14 crc kubenswrapper[4684]: I0121 10:40:14.506393 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-shrfd"] Jan 21 10:40:14 crc kubenswrapper[4684]: I0121 10:40:14.510765 4684 scope.go:117] "RemoveContainer" containerID="56f63b4717e3b3b28c601c1adc1cc6bb33a3a18bb97b1b639967f42192c07251" Jan 21 10:40:14 crc kubenswrapper[4684]: I0121 10:40:14.511511 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-shrfd"] Jan 21 10:40:14 crc kubenswrapper[4684]: I0121 10:40:14.527597 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6826f57-513b-4912-b8f3-630344824bc0" path="/var/lib/kubelet/pods/e6826f57-513b-4912-b8f3-630344824bc0/volumes" Jan 21 10:41:44 crc kubenswrapper[4684]: I0121 10:41:44.288304 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-kmlcc"] Jan 21 10:41:44 crc kubenswrapper[4684]: E0121 10:41:44.289308 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6826f57-513b-4912-b8f3-630344824bc0" containerName="registry-server" Jan 21 10:41:44 crc kubenswrapper[4684]: I0121 10:41:44.289325 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6826f57-513b-4912-b8f3-630344824bc0" containerName="registry-server" Jan 21 10:41:44 crc kubenswrapper[4684]: E0121 10:41:44.289350 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6826f57-513b-4912-b8f3-630344824bc0" containerName="extract-content" Jan 21 10:41:44 crc kubenswrapper[4684]: I0121 10:41:44.289379 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6826f57-513b-4912-b8f3-630344824bc0" containerName="extract-content" Jan 21 10:41:44 crc kubenswrapper[4684]: E0121 10:41:44.289399 4684 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6826f57-513b-4912-b8f3-630344824bc0" containerName="extract-utilities" Jan 21 10:41:44 crc kubenswrapper[4684]: I0121 10:41:44.289409 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6826f57-513b-4912-b8f3-630344824bc0" containerName="extract-utilities" Jan 21 10:41:44 crc kubenswrapper[4684]: I0121 10:41:44.289571 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6826f57-513b-4912-b8f3-630344824bc0" containerName="registry-server" Jan 21 10:41:44 crc kubenswrapper[4684]: I0121 10:41:44.290150 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-kmlcc" Jan 21 10:41:44 crc kubenswrapper[4684]: I0121 10:41:44.298653 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-kmlcc"] Jan 21 10:41:44 crc kubenswrapper[4684]: I0121 10:41:44.370600 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nscs\" (UniqueName: \"kubernetes.io/projected/e00a7ddd-e936-48cb-9097-232eecdca5be-kube-api-access-5nscs\") pod \"infrawatch-operators-kmlcc\" (UID: \"e00a7ddd-e936-48cb-9097-232eecdca5be\") " pod="service-telemetry/infrawatch-operators-kmlcc" Jan 21 10:41:44 crc kubenswrapper[4684]: I0121 10:41:44.472082 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nscs\" (UniqueName: \"kubernetes.io/projected/e00a7ddd-e936-48cb-9097-232eecdca5be-kube-api-access-5nscs\") pod \"infrawatch-operators-kmlcc\" (UID: \"e00a7ddd-e936-48cb-9097-232eecdca5be\") " pod="service-telemetry/infrawatch-operators-kmlcc" Jan 21 10:41:44 crc kubenswrapper[4684]: I0121 10:41:44.500971 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nscs\" (UniqueName: \"kubernetes.io/projected/e00a7ddd-e936-48cb-9097-232eecdca5be-kube-api-access-5nscs\") pod \"infrawatch-operators-kmlcc\" (UID: \"e00a7ddd-e936-48cb-9097-232eecdca5be\") " pod="service-telemetry/infrawatch-operators-kmlcc" Jan 21 10:41:44 crc kubenswrapper[4684]: I0121 10:41:44.624050 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-kmlcc" Jan 21 10:41:45 crc kubenswrapper[4684]: I0121 10:41:45.067173 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-kmlcc"] Jan 21 10:41:45 crc kubenswrapper[4684]: I0121 10:41:45.082571 4684 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 10:41:45 crc kubenswrapper[4684]: I0121 10:41:45.118683 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-kmlcc" event={"ID":"e00a7ddd-e936-48cb-9097-232eecdca5be","Type":"ContainerStarted","Data":"ee5cb848275c0cbc3397d9da057b25a9d01bde97c02138ebc6127e66b2cb4354"} Jan 21 10:41:46 crc kubenswrapper[4684]: I0121 10:41:46.126617 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-kmlcc" event={"ID":"e00a7ddd-e936-48cb-9097-232eecdca5be","Type":"ContainerStarted","Data":"279ba8bc3ad6c9bc4f9c92c753d1797f86be8a8cbdf6d5e836cbec1c9a2a4f5e"} Jan 21 10:41:46 crc kubenswrapper[4684]: I0121 10:41:46.140564 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-kmlcc" podStartSLOduration=1.694061458 podStartE2EDuration="2.140544657s" podCreationTimestamp="2026-01-21 10:41:44 +0000 UTC" firstStartedPulling="2026-01-21 10:41:45.082284187 +0000 UTC m=+2142.840367154" lastFinishedPulling="2026-01-21 10:41:45.528767386 +0000 UTC m=+2143.286850353" observedRunningTime="2026-01-21 10:41:46.139318109 +0000 UTC m=+2143.897401076" watchObservedRunningTime="2026-01-21 10:41:46.140544657 +0000 UTC m=+2143.898627624" Jan 21 10:41:54 crc kubenswrapper[4684]: I0121 10:41:54.624353 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-kmlcc" Jan 21 10:41:54 crc kubenswrapper[4684]: I0121 10:41:54.624931 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-kmlcc" Jan 21 10:41:54 crc kubenswrapper[4684]: I0121 10:41:54.652354 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-kmlcc" Jan 21 10:41:55 crc kubenswrapper[4684]: I0121 10:41:55.225826 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-kmlcc" Jan 21 10:41:55 crc kubenswrapper[4684]: I0121 10:41:55.293590 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-kmlcc"] Jan 21 10:41:57 crc kubenswrapper[4684]: I0121 10:41:57.206569 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-kmlcc" podUID="e00a7ddd-e936-48cb-9097-232eecdca5be" containerName="registry-server" containerID="cri-o://279ba8bc3ad6c9bc4f9c92c753d1797f86be8a8cbdf6d5e836cbec1c9a2a4f5e" gracePeriod=2 Jan 21 10:41:57 crc kubenswrapper[4684]: I0121 10:41:57.585001 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-kmlcc" Jan 21 10:41:57 crc kubenswrapper[4684]: I0121 10:41:57.775293 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nscs\" (UniqueName: \"kubernetes.io/projected/e00a7ddd-e936-48cb-9097-232eecdca5be-kube-api-access-5nscs\") pod \"e00a7ddd-e936-48cb-9097-232eecdca5be\" (UID: \"e00a7ddd-e936-48cb-9097-232eecdca5be\") " Jan 21 10:41:57 crc kubenswrapper[4684]: I0121 10:41:57.786562 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e00a7ddd-e936-48cb-9097-232eecdca5be-kube-api-access-5nscs" (OuterVolumeSpecName: "kube-api-access-5nscs") pod "e00a7ddd-e936-48cb-9097-232eecdca5be" (UID: "e00a7ddd-e936-48cb-9097-232eecdca5be"). InnerVolumeSpecName "kube-api-access-5nscs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:41:57 crc kubenswrapper[4684]: I0121 10:41:57.877186 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nscs\" (UniqueName: \"kubernetes.io/projected/e00a7ddd-e936-48cb-9097-232eecdca5be-kube-api-access-5nscs\") on node \"crc\" DevicePath \"\"" Jan 21 10:41:58 crc kubenswrapper[4684]: I0121 10:41:58.215575 4684 generic.go:334] "Generic (PLEG): container finished" podID="e00a7ddd-e936-48cb-9097-232eecdca5be" containerID="279ba8bc3ad6c9bc4f9c92c753d1797f86be8a8cbdf6d5e836cbec1c9a2a4f5e" exitCode=0 Jan 21 10:41:58 crc kubenswrapper[4684]: I0121 10:41:58.215628 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-kmlcc" Jan 21 10:41:58 crc kubenswrapper[4684]: I0121 10:41:58.215636 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-kmlcc" event={"ID":"e00a7ddd-e936-48cb-9097-232eecdca5be","Type":"ContainerDied","Data":"279ba8bc3ad6c9bc4f9c92c753d1797f86be8a8cbdf6d5e836cbec1c9a2a4f5e"} Jan 21 10:41:58 crc kubenswrapper[4684]: I0121 10:41:58.215708 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-kmlcc" event={"ID":"e00a7ddd-e936-48cb-9097-232eecdca5be","Type":"ContainerDied","Data":"ee5cb848275c0cbc3397d9da057b25a9d01bde97c02138ebc6127e66b2cb4354"} Jan 21 10:41:58 crc kubenswrapper[4684]: I0121 10:41:58.215737 4684 scope.go:117] "RemoveContainer" containerID="279ba8bc3ad6c9bc4f9c92c753d1797f86be8a8cbdf6d5e836cbec1c9a2a4f5e" Jan 21 10:41:58 crc kubenswrapper[4684]: I0121 10:41:58.247238 4684 scope.go:117] "RemoveContainer" containerID="279ba8bc3ad6c9bc4f9c92c753d1797f86be8a8cbdf6d5e836cbec1c9a2a4f5e" Jan 21 10:41:58 crc kubenswrapper[4684]: E0121 10:41:58.249719 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"279ba8bc3ad6c9bc4f9c92c753d1797f86be8a8cbdf6d5e836cbec1c9a2a4f5e\": container with ID starting with 279ba8bc3ad6c9bc4f9c92c753d1797f86be8a8cbdf6d5e836cbec1c9a2a4f5e not found: ID does not exist" containerID="279ba8bc3ad6c9bc4f9c92c753d1797f86be8a8cbdf6d5e836cbec1c9a2a4f5e" Jan 21 10:41:58 crc kubenswrapper[4684]: I0121 10:41:58.249769 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"279ba8bc3ad6c9bc4f9c92c753d1797f86be8a8cbdf6d5e836cbec1c9a2a4f5e"} err="failed to get container status \"279ba8bc3ad6c9bc4f9c92c753d1797f86be8a8cbdf6d5e836cbec1c9a2a4f5e\": rpc error: code = NotFound desc = could not find container 
\"279ba8bc3ad6c9bc4f9c92c753d1797f86be8a8cbdf6d5e836cbec1c9a2a4f5e\": container with ID starting with 279ba8bc3ad6c9bc4f9c92c753d1797f86be8a8cbdf6d5e836cbec1c9a2a4f5e not found: ID does not exist" Jan 21 10:41:58 crc kubenswrapper[4684]: I0121 10:41:58.252467 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-kmlcc"] Jan 21 10:41:58 crc kubenswrapper[4684]: I0121 10:41:58.260543 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-kmlcc"] Jan 21 10:41:58 crc kubenswrapper[4684]: I0121 10:41:58.525519 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e00a7ddd-e936-48cb-9097-232eecdca5be" path="/var/lib/kubelet/pods/e00a7ddd-e936-48cb-9097-232eecdca5be/volumes" Jan 21 10:42:07 crc kubenswrapper[4684]: I0121 10:42:07.302998 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:42:07 crc kubenswrapper[4684]: I0121 10:42:07.304112 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:42:16 crc kubenswrapper[4684]: I0121 10:42:16.070717 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bhgqh"] Jan 21 10:42:16 crc kubenswrapper[4684]: E0121 10:42:16.071627 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e00a7ddd-e936-48cb-9097-232eecdca5be" containerName="registry-server" Jan 21 10:42:16 crc kubenswrapper[4684]: I0121 10:42:16.071641 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="e00a7ddd-e936-48cb-9097-232eecdca5be" containerName="registry-server" Jan 21 10:42:16 crc kubenswrapper[4684]: I0121 10:42:16.071833 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="e00a7ddd-e936-48cb-9097-232eecdca5be" containerName="registry-server" Jan 21 10:42:16 crc kubenswrapper[4684]: I0121 10:42:16.072733 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bhgqh" Jan 21 10:42:16 crc kubenswrapper[4684]: I0121 10:42:16.086772 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bhgqh"] Jan 21 10:42:16 crc kubenswrapper[4684]: I0121 10:42:16.189934 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb4hp\" (UniqueName: \"kubernetes.io/projected/01b599ba-5fb0-4d41-90cb-54e41ff6633f-kube-api-access-zb4hp\") pod \"community-operators-bhgqh\" (UID: \"01b599ba-5fb0-4d41-90cb-54e41ff6633f\") " pod="openshift-marketplace/community-operators-bhgqh" Jan 21 10:42:16 crc kubenswrapper[4684]: I0121 10:42:16.190033 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01b599ba-5fb0-4d41-90cb-54e41ff6633f-catalog-content\") pod \"community-operators-bhgqh\" (UID: \"01b599ba-5fb0-4d41-90cb-54e41ff6633f\") " pod="openshift-marketplace/community-operators-bhgqh" Jan 21 10:42:16 crc kubenswrapper[4684]: I0121 10:42:16.190208 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01b599ba-5fb0-4d41-90cb-54e41ff6633f-utilities\") pod \"community-operators-bhgqh\" (UID: \"01b599ba-5fb0-4d41-90cb-54e41ff6633f\") " pod="openshift-marketplace/community-operators-bhgqh" Jan 21 10:42:16 crc kubenswrapper[4684]: I0121 10:42:16.291605 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01b599ba-5fb0-4d41-90cb-54e41ff6633f-utilities\") pod \"community-operators-bhgqh\" (UID: \"01b599ba-5fb0-4d41-90cb-54e41ff6633f\") " pod="openshift-marketplace/community-operators-bhgqh" Jan 21 10:42:16 crc kubenswrapper[4684]: I0121 10:42:16.291717 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb4hp\" (UniqueName: \"kubernetes.io/projected/01b599ba-5fb0-4d41-90cb-54e41ff6633f-kube-api-access-zb4hp\") pod \"community-operators-bhgqh\" (UID: \"01b599ba-5fb0-4d41-90cb-54e41ff6633f\") " pod="openshift-marketplace/community-operators-bhgqh" Jan 21 10:42:16 crc kubenswrapper[4684]: I0121 10:42:16.291748 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01b599ba-5fb0-4d41-90cb-54e41ff6633f-catalog-content\") pod \"community-operators-bhgqh\" (UID: \"01b599ba-5fb0-4d41-90cb-54e41ff6633f\") " pod="openshift-marketplace/community-operators-bhgqh" Jan 21 10:42:16 crc kubenswrapper[4684]: I0121 10:42:16.292325 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01b599ba-5fb0-4d41-90cb-54e41ff6633f-catalog-content\") pod \"community-operators-bhgqh\" (UID: \"01b599ba-5fb0-4d41-90cb-54e41ff6633f\") " pod="openshift-marketplace/community-operators-bhgqh" Jan 21 10:42:16 crc kubenswrapper[4684]: I0121 10:42:16.292454 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01b599ba-5fb0-4d41-90cb-54e41ff6633f-utilities\") pod \"community-operators-bhgqh\" (UID: \"01b599ba-5fb0-4d41-90cb-54e41ff6633f\") " pod="openshift-marketplace/community-operators-bhgqh" Jan 21 10:42:16 crc kubenswrapper[4684]: I0121 10:42:16.327982 4684 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zb4hp\" (UniqueName: \"kubernetes.io/projected/01b599ba-5fb0-4d41-90cb-54e41ff6633f-kube-api-access-zb4hp\") pod \"community-operators-bhgqh\" (UID: \"01b599ba-5fb0-4d41-90cb-54e41ff6633f\") " pod="openshift-marketplace/community-operators-bhgqh" Jan 21 10:42:16 crc kubenswrapper[4684]: I0121 10:42:16.398981 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bhgqh" Jan 21 10:42:16 crc kubenswrapper[4684]: I0121 10:42:16.914643 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bhgqh"] Jan 21 10:42:17 crc kubenswrapper[4684]: I0121 10:42:17.378762 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhgqh" event={"ID":"01b599ba-5fb0-4d41-90cb-54e41ff6633f","Type":"ContainerStarted","Data":"7598ca19731efb0acc370519b9d82c07dd6e5d9f5eb6d777ce9bc654abd3fbbe"} Jan 21 10:42:19 crc kubenswrapper[4684]: I0121 10:42:19.396853 4684 generic.go:334] "Generic (PLEG): container finished" podID="01b599ba-5fb0-4d41-90cb-54e41ff6633f" containerID="c0e7f1abe0a8da42c64beb098dec56db4028090828a1edc424fb048d7b782af8" exitCode=0 Jan 21 10:42:19 crc kubenswrapper[4684]: I0121 10:42:19.396966 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhgqh" event={"ID":"01b599ba-5fb0-4d41-90cb-54e41ff6633f","Type":"ContainerDied","Data":"c0e7f1abe0a8da42c64beb098dec56db4028090828a1edc424fb048d7b782af8"} Jan 21 10:42:21 crc kubenswrapper[4684]: I0121 10:42:21.421900 4684 generic.go:334] "Generic (PLEG): container finished" podID="01b599ba-5fb0-4d41-90cb-54e41ff6633f" containerID="0f068238664db5746674c256d6e67a38266cda7a8633c82dfac147227fce0f3c" exitCode=0 Jan 21 10:42:21 crc kubenswrapper[4684]: I0121 10:42:21.422262 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhgqh" event={"ID":"01b599ba-5fb0-4d41-90cb-54e41ff6633f","Type":"ContainerDied","Data":"0f068238664db5746674c256d6e67a38266cda7a8633c82dfac147227fce0f3c"} Jan 21 10:42:25 crc kubenswrapper[4684]: I0121 10:42:25.450565 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhgqh" event={"ID":"01b599ba-5fb0-4d41-90cb-54e41ff6633f","Type":"ContainerStarted","Data":"7d67a32a710e22f0af45d8b5b07d1f1b2732d9f29523a81275daf22f396e0c8e"} Jan 21 10:42:25 crc kubenswrapper[4684]: I0121 10:42:25.496028 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bhgqh" podStartSLOduration=4.783316089 podStartE2EDuration="9.496005712s" podCreationTimestamp="2026-01-21 10:42:16 +0000 UTC" firstStartedPulling="2026-01-21 10:42:19.399777469 +0000 UTC m=+2177.157860436" lastFinishedPulling="2026-01-21 10:42:24.112467052 +0000 UTC m=+2181.870550059" observedRunningTime="2026-01-21 10:42:25.486101954 +0000 UTC m=+2183.244184921" watchObservedRunningTime="2026-01-21 10:42:25.496005712 +0000 UTC m=+2183.254088679" Jan 21 10:42:26 crc kubenswrapper[4684]: I0121 10:42:26.399262 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bhgqh" Jan 21 10:42:26 crc kubenswrapper[4684]: I0121 10:42:26.399928 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bhgqh" Jan 21 10:42:27 crc kubenswrapper[4684]: I0121 10:42:27.443693 4684 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-bhgqh" podUID="01b599ba-5fb0-4d41-90cb-54e41ff6633f" containerName="registry-server" probeResult="failure" output=< Jan 21 10:42:27 crc kubenswrapper[4684]: timeout: failed to connect service ":50051" within 1s Jan 21 10:42:27 crc kubenswrapper[4684]: > Jan 21 10:42:36 crc kubenswrapper[4684]: I0121 10:42:36.448769 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bhgqh" Jan 21 10:42:36 crc kubenswrapper[4684]: I0121 10:42:36.499870 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bhgqh" Jan 21 10:42:36 crc kubenswrapper[4684]: I0121 10:42:36.696356 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bhgqh"] Jan 21 10:42:37 crc kubenswrapper[4684]: I0121 10:42:37.302423 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:42:37 crc kubenswrapper[4684]: I0121 10:42:37.302527 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:42:37 crc kubenswrapper[4684]: I0121 10:42:37.537653 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bhgqh" podUID="01b599ba-5fb0-4d41-90cb-54e41ff6633f" containerName="registry-server" containerID="cri-o://7d67a32a710e22f0af45d8b5b07d1f1b2732d9f29523a81275daf22f396e0c8e" gracePeriod=2 Jan 21 10:42:40 crc kubenswrapper[4684]: I0121 10:42:40.566677 4684 generic.go:334] "Generic (PLEG): container finished" podID="01b599ba-5fb0-4d41-90cb-54e41ff6633f" containerID="7d67a32a710e22f0af45d8b5b07d1f1b2732d9f29523a81275daf22f396e0c8e" exitCode=0 Jan 21 10:42:40 crc kubenswrapper[4684]: I0121 10:42:40.568052 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhgqh" event={"ID":"01b599ba-5fb0-4d41-90cb-54e41ff6633f","Type":"ContainerDied","Data":"7d67a32a710e22f0af45d8b5b07d1f1b2732d9f29523a81275daf22f396e0c8e"} Jan 21 10:42:40 crc kubenswrapper[4684]: I0121 10:42:40.644675 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bhgqh" Jan 21 10:42:40 crc kubenswrapper[4684]: I0121 10:42:40.710956 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01b599ba-5fb0-4d41-90cb-54e41ff6633f-utilities\") pod \"01b599ba-5fb0-4d41-90cb-54e41ff6633f\" (UID: \"01b599ba-5fb0-4d41-90cb-54e41ff6633f\") " Jan 21 10:42:40 crc kubenswrapper[4684]: I0121 10:42:40.711114 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01b599ba-5fb0-4d41-90cb-54e41ff6633f-catalog-content\") pod \"01b599ba-5fb0-4d41-90cb-54e41ff6633f\" (UID: \"01b599ba-5fb0-4d41-90cb-54e41ff6633f\") " Jan 21 10:42:40 crc kubenswrapper[4684]: I0121 10:42:40.711152 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb4hp\" (UniqueName: \"kubernetes.io/projected/01b599ba-5fb0-4d41-90cb-54e41ff6633f-kube-api-access-zb4hp\") pod \"01b599ba-5fb0-4d41-90cb-54e41ff6633f\" (UID: \"01b599ba-5fb0-4d41-90cb-54e41ff6633f\") " Jan 21 10:42:40 crc kubenswrapper[4684]: I0121 10:42:40.715150 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01b599ba-5fb0-4d41-90cb-54e41ff6633f-utilities" (OuterVolumeSpecName: "utilities") pod "01b599ba-5fb0-4d41-90cb-54e41ff6633f" (UID: "01b599ba-5fb0-4d41-90cb-54e41ff6633f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:42:40 crc kubenswrapper[4684]: I0121 10:42:40.720725 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01b599ba-5fb0-4d41-90cb-54e41ff6633f-kube-api-access-zb4hp" (OuterVolumeSpecName: "kube-api-access-zb4hp") pod "01b599ba-5fb0-4d41-90cb-54e41ff6633f" (UID: "01b599ba-5fb0-4d41-90cb-54e41ff6633f"). InnerVolumeSpecName "kube-api-access-zb4hp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:42:40 crc kubenswrapper[4684]: I0121 10:42:40.770614 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01b599ba-5fb0-4d41-90cb-54e41ff6633f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "01b599ba-5fb0-4d41-90cb-54e41ff6633f" (UID: "01b599ba-5fb0-4d41-90cb-54e41ff6633f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:42:40 crc kubenswrapper[4684]: I0121 10:42:40.813816 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01b599ba-5fb0-4d41-90cb-54e41ff6633f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:42:40 crc kubenswrapper[4684]: I0121 10:42:40.813859 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01b599ba-5fb0-4d41-90cb-54e41ff6633f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:42:40 crc kubenswrapper[4684]: I0121 10:42:40.813875 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zb4hp\" (UniqueName: \"kubernetes.io/projected/01b599ba-5fb0-4d41-90cb-54e41ff6633f-kube-api-access-zb4hp\") on node \"crc\" DevicePath \"\"" Jan 21 10:42:41 crc kubenswrapper[4684]: I0121 10:42:41.581053 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhgqh" event={"ID":"01b599ba-5fb0-4d41-90cb-54e41ff6633f","Type":"ContainerDied","Data":"7598ca19731efb0acc370519b9d82c07dd6e5d9f5eb6d777ce9bc654abd3fbbe"} Jan 21 10:42:41 crc kubenswrapper[4684]: I0121 10:42:41.581087 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bhgqh" Jan 21 10:42:41 crc kubenswrapper[4684]: I0121 10:42:41.581139 4684 scope.go:117] "RemoveContainer" containerID="7d67a32a710e22f0af45d8b5b07d1f1b2732d9f29523a81275daf22f396e0c8e" Jan 21 10:42:41 crc kubenswrapper[4684]: I0121 10:42:41.623895 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bhgqh"] Jan 21 10:42:41 crc kubenswrapper[4684]: I0121 10:42:41.624231 4684 scope.go:117] "RemoveContainer" containerID="0f068238664db5746674c256d6e67a38266cda7a8633c82dfac147227fce0f3c" Jan 21 10:42:41 crc kubenswrapper[4684]: I0121 10:42:41.635532 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bhgqh"] Jan 21 10:42:41 crc kubenswrapper[4684]: I0121 10:42:41.648667 4684 scope.go:117] "RemoveContainer" containerID="c0e7f1abe0a8da42c64beb098dec56db4028090828a1edc424fb048d7b782af8" Jan 21 10:42:42 crc kubenswrapper[4684]: I0121 10:42:42.523091 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01b599ba-5fb0-4d41-90cb-54e41ff6633f" path="/var/lib/kubelet/pods/01b599ba-5fb0-4d41-90cb-54e41ff6633f/volumes" Jan 21 10:43:07 crc kubenswrapper[4684]: I0121 10:43:07.302702 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:43:07 crc kubenswrapper[4684]: I0121 10:43:07.303281 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:43:07 crc kubenswrapper[4684]: I0121 10:43:07.303338 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" Jan 21 10:43:07 crc kubenswrapper[4684]: I0121 10:43:07.304027 4684 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"062d85d4dd293a66c5ea87b72d5ef4e17f0f11089742f21c2445dde53232d20e"} pod="openshift-machine-config-operator/machine-config-daemon-sff6s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 10:43:07 crc kubenswrapper[4684]: I0121 10:43:07.304082 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" containerID="cri-o://062d85d4dd293a66c5ea87b72d5ef4e17f0f11089742f21c2445dde53232d20e" gracePeriod=600 Jan 21 10:43:07 crc kubenswrapper[4684]: E0121 10:43:07.436182 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:43:07 crc kubenswrapper[4684]: I0121 10:43:07.782092 4684 generic.go:334] "Generic (PLEG): container finished" podID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerID="062d85d4dd293a66c5ea87b72d5ef4e17f0f11089742f21c2445dde53232d20e" exitCode=0 Jan 21 10:43:07 crc kubenswrapper[4684]: I0121 10:43:07.782146 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" event={"ID":"55d2c484-cf10-46b5-913f-2a033a2ff5c1","Type":"ContainerDied","Data":"062d85d4dd293a66c5ea87b72d5ef4e17f0f11089742f21c2445dde53232d20e"} Jan 21 10:43:07 crc kubenswrapper[4684]: I0121 10:43:07.782191 4684 scope.go:117] "RemoveContainer" containerID="2e44b213bd94f3fcb7f66806990f0c2b0fa0b300b8664a99a36aebc9225a8bc1" Jan 21 10:43:07 crc kubenswrapper[4684]: I0121 10:43:07.783956 4684 scope.go:117] "RemoveContainer" containerID="062d85d4dd293a66c5ea87b72d5ef4e17f0f11089742f21c2445dde53232d20e" Jan 21 10:43:07 crc kubenswrapper[4684]: E0121 10:43:07.784444 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:43:23 crc kubenswrapper[4684]: I0121 10:43:23.514673 4684 scope.go:117] "RemoveContainer" containerID="062d85d4dd293a66c5ea87b72d5ef4e17f0f11089742f21c2445dde53232d20e" Jan 21 10:43:23 crc kubenswrapper[4684]: E0121 10:43:23.515552 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:43:38 crc kubenswrapper[4684]: I0121 10:43:38.518008 4684 scope.go:117] "RemoveContainer" containerID="062d85d4dd293a66c5ea87b72d5ef4e17f0f11089742f21c2445dde53232d20e" Jan 21 
10:43:38 crc kubenswrapper[4684]: E0121 10:43:38.518802 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:43:49 crc kubenswrapper[4684]: I0121 10:43:49.524790 4684 scope.go:117] "RemoveContainer" containerID="062d85d4dd293a66c5ea87b72d5ef4e17f0f11089742f21c2445dde53232d20e" Jan 21 10:43:49 crc kubenswrapper[4684]: E0121 10:43:49.528484 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:44:02 crc kubenswrapper[4684]: I0121 10:44:02.517710 4684 scope.go:117] "RemoveContainer" containerID="062d85d4dd293a66c5ea87b72d5ef4e17f0f11089742f21c2445dde53232d20e" Jan 21 10:44:02 crc kubenswrapper[4684]: E0121 10:44:02.518538 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:44:14 crc kubenswrapper[4684]: I0121 10:44:14.331438 4684 scope.go:117] "RemoveContainer" containerID="062d85d4dd293a66c5ea87b72d5ef4e17f0f11089742f21c2445dde53232d20e" Jan 21 10:44:14 crc kubenswrapper[4684]: E0121 10:44:14.332323 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:44:27 crc kubenswrapper[4684]: I0121 10:44:27.514911 4684 scope.go:117] "RemoveContainer" containerID="062d85d4dd293a66c5ea87b72d5ef4e17f0f11089742f21c2445dde53232d20e" Jan 21 10:44:27 crc kubenswrapper[4684]: E0121 10:44:27.515704 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:44:39 crc kubenswrapper[4684]: I0121 10:44:39.515081 4684 scope.go:117] "RemoveContainer" containerID="062d85d4dd293a66c5ea87b72d5ef4e17f0f11089742f21c2445dde53232d20e" Jan 21 10:44:39 crc kubenswrapper[4684]: E0121 10:44:39.516118 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:44:54 crc kubenswrapper[4684]: I0121 10:44:54.514452 4684 scope.go:117] "RemoveContainer" containerID="062d85d4dd293a66c5ea87b72d5ef4e17f0f11089742f21c2445dde53232d20e" Jan 21 10:44:54 crc kubenswrapper[4684]: E0121 10:44:54.515180 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:45:00 crc kubenswrapper[4684]: I0121 10:45:00.149599 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483205-vf2zc"] Jan 21 10:45:00 crc kubenswrapper[4684]: E0121 10:45:00.150214 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01b599ba-5fb0-4d41-90cb-54e41ff6633f" containerName="registry-server" Jan 21 10:45:00 crc kubenswrapper[4684]: I0121 10:45:00.150231 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="01b599ba-5fb0-4d41-90cb-54e41ff6633f" containerName="registry-server" Jan 21 10:45:00 crc kubenswrapper[4684]: E0121 10:45:00.150247 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01b599ba-5fb0-4d41-90cb-54e41ff6633f" containerName="extract-utilities" Jan 21 10:45:00 crc kubenswrapper[4684]: I0121 10:45:00.150253 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="01b599ba-5fb0-4d41-90cb-54e41ff6633f" containerName="extract-utilities" Jan 21 10:45:00 crc kubenswrapper[4684]: E0121 10:45:00.150269 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01b599ba-5fb0-4d41-90cb-54e41ff6633f" containerName="extract-content" Jan 21 10:45:00 crc kubenswrapper[4684]: I0121 10:45:00.150278 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="01b599ba-5fb0-4d41-90cb-54e41ff6633f" containerName="extract-content" Jan 21 10:45:00 crc kubenswrapper[4684]: I0121 10:45:00.150446 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="01b599ba-5fb0-4d41-90cb-54e41ff6633f" containerName="registry-server" Jan 21 10:45:00 crc kubenswrapper[4684]: I0121 10:45:00.150956 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-vf2zc" Jan 21 10:45:00 crc kubenswrapper[4684]: I0121 10:45:00.153181 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 10:45:00 crc kubenswrapper[4684]: I0121 10:45:00.153693 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 10:45:00 crc kubenswrapper[4684]: I0121 10:45:00.174796 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483205-vf2zc"] Jan 21 10:45:00 crc kubenswrapper[4684]: I0121 10:45:00.329001 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z76bt\" (UniqueName: \"kubernetes.io/projected/0c809130-899d-472a-bec6-e6d415317622-kube-api-access-z76bt\") pod \"collect-profiles-29483205-vf2zc\" (UID: \"0c809130-899d-472a-bec6-e6d415317622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-vf2zc" Jan 21 10:45:00 crc kubenswrapper[4684]: I0121 10:45:00.329260 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c809130-899d-472a-bec6-e6d415317622-config-volume\") pod \"collect-profiles-29483205-vf2zc\" (UID: \"0c809130-899d-472a-bec6-e6d415317622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-vf2zc" Jan 21 10:45:00 crc kubenswrapper[4684]: I0121 10:45:00.329327 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c809130-899d-472a-bec6-e6d415317622-secret-volume\") pod \"collect-profiles-29483205-vf2zc\" (UID: \"0c809130-899d-472a-bec6-e6d415317622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-vf2zc" Jan 21 10:45:00 crc kubenswrapper[4684]: I0121 10:45:00.430588 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c809130-899d-472a-bec6-e6d415317622-config-volume\") pod \"collect-profiles-29483205-vf2zc\" (UID: \"0c809130-899d-472a-bec6-e6d415317622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-vf2zc" Jan 21 10:45:00 crc kubenswrapper[4684]: I0121 10:45:00.430643 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c809130-899d-472a-bec6-e6d415317622-secret-volume\") pod \"collect-profiles-29483205-vf2zc\" (UID: \"0c809130-899d-472a-bec6-e6d415317622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-vf2zc" Jan 21 10:45:00 crc kubenswrapper[4684]: I0121 10:45:00.430715 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z76bt\" (UniqueName: \"kubernetes.io/projected/0c809130-899d-472a-bec6-e6d415317622-kube-api-access-z76bt\") pod \"collect-profiles-29483205-vf2zc\" (UID: \"0c809130-899d-472a-bec6-e6d415317622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-vf2zc" Jan 21 10:45:00 crc kubenswrapper[4684]: I0121 10:45:00.432198 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c809130-899d-472a-bec6-e6d415317622-config-volume\") pod 
\"collect-profiles-29483205-vf2zc\" (UID: \"0c809130-899d-472a-bec6-e6d415317622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-vf2zc" Jan 21 10:45:00 crc kubenswrapper[4684]: I0121 10:45:00.452577 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c809130-899d-472a-bec6-e6d415317622-secret-volume\") pod \"collect-profiles-29483205-vf2zc\" (UID: \"0c809130-899d-472a-bec6-e6d415317622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-vf2zc" Jan 21 10:45:00 crc kubenswrapper[4684]: I0121 10:45:00.453794 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z76bt\" (UniqueName: \"kubernetes.io/projected/0c809130-899d-472a-bec6-e6d415317622-kube-api-access-z76bt\") pod \"collect-profiles-29483205-vf2zc\" (UID: \"0c809130-899d-472a-bec6-e6d415317622\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-vf2zc" Jan 21 10:45:00 crc kubenswrapper[4684]: I0121 10:45:00.484606 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-vf2zc" Jan 21 10:45:00 crc kubenswrapper[4684]: I0121 10:45:00.739290 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483205-vf2zc"] Jan 21 10:45:01 crc kubenswrapper[4684]: I0121 10:45:01.704481 4684 generic.go:334] "Generic (PLEG): container finished" podID="0c809130-899d-472a-bec6-e6d415317622" containerID="533559ff10c3b004517ad27262ed2cd914abb597fe48292338d1513019b46b42" exitCode=0 Jan 21 10:45:01 crc kubenswrapper[4684]: I0121 10:45:01.705570 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-vf2zc" event={"ID":"0c809130-899d-472a-bec6-e6d415317622","Type":"ContainerDied","Data":"533559ff10c3b004517ad27262ed2cd914abb597fe48292338d1513019b46b42"} Jan 21 10:45:01 crc kubenswrapper[4684]: I0121 10:45:01.706054 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-vf2zc" event={"ID":"0c809130-899d-472a-bec6-e6d415317622","Type":"ContainerStarted","Data":"a399213d5c12e2049d54f3511d8d2e39c0f040dd12466cebb002937d5f1089a2"} Jan 21 10:45:02 crc kubenswrapper[4684]: I0121 10:45:02.947743 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-vf2zc" Jan 21 10:45:03 crc kubenswrapper[4684]: I0121 10:45:03.065414 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c809130-899d-472a-bec6-e6d415317622-secret-volume\") pod \"0c809130-899d-472a-bec6-e6d415317622\" (UID: \"0c809130-899d-472a-bec6-e6d415317622\") " Jan 21 10:45:03 crc kubenswrapper[4684]: I0121 10:45:03.065971 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c809130-899d-472a-bec6-e6d415317622-config-volume\") pod \"0c809130-899d-472a-bec6-e6d415317622\" (UID: \"0c809130-899d-472a-bec6-e6d415317622\") " Jan 21 10:45:03 crc kubenswrapper[4684]: I0121 10:45:03.066003 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z76bt\" (UniqueName: \"kubernetes.io/projected/0c809130-899d-472a-bec6-e6d415317622-kube-api-access-z76bt\") pod \"0c809130-899d-472a-bec6-e6d415317622\" (UID: \"0c809130-899d-472a-bec6-e6d415317622\") " Jan 21 10:45:03 crc kubenswrapper[4684]: I0121 10:45:03.066580 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c809130-899d-472a-bec6-e6d415317622-config-volume" (OuterVolumeSpecName: "config-volume") pod "0c809130-899d-472a-bec6-e6d415317622" (UID: "0c809130-899d-472a-bec6-e6d415317622"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:45:03 crc kubenswrapper[4684]: I0121 10:45:03.070382 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c809130-899d-472a-bec6-e6d415317622-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0c809130-899d-472a-bec6-e6d415317622" (UID: "0c809130-899d-472a-bec6-e6d415317622"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:45:03 crc kubenswrapper[4684]: I0121 10:45:03.070496 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c809130-899d-472a-bec6-e6d415317622-kube-api-access-z76bt" (OuterVolumeSpecName: "kube-api-access-z76bt") pod "0c809130-899d-472a-bec6-e6d415317622" (UID: "0c809130-899d-472a-bec6-e6d415317622"). InnerVolumeSpecName "kube-api-access-z76bt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:45:03 crc kubenswrapper[4684]: I0121 10:45:03.167279 4684 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c809130-899d-472a-bec6-e6d415317622-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 10:45:03 crc kubenswrapper[4684]: I0121 10:45:03.167325 4684 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0c809130-899d-472a-bec6-e6d415317622-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 10:45:03 crc kubenswrapper[4684]: I0121 10:45:03.167337 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z76bt\" (UniqueName: \"kubernetes.io/projected/0c809130-899d-472a-bec6-e6d415317622-kube-api-access-z76bt\") on node \"crc\" DevicePath \"\"" Jan 21 10:45:03 crc kubenswrapper[4684]: I0121 10:45:03.718938 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-vf2zc" event={"ID":"0c809130-899d-472a-bec6-e6d415317622","Type":"ContainerDied","Data":"a399213d5c12e2049d54f3511d8d2e39c0f040dd12466cebb002937d5f1089a2"} Jan 21 10:45:03 crc kubenswrapper[4684]: I0121 10:45:03.719205 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a399213d5c12e2049d54f3511d8d2e39c0f040dd12466cebb002937d5f1089a2" Jan 21 10:45:03 crc kubenswrapper[4684]: I0121 10:45:03.719147 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-vf2zc" Jan 21 10:45:04 crc kubenswrapper[4684]: I0121 10:45:04.020243 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483160-9pprb"] Jan 21 10:45:04 crc kubenswrapper[4684]: I0121 10:45:04.028016 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483160-9pprb"] Jan 21 10:45:04 crc kubenswrapper[4684]: I0121 10:45:04.524695 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34c69792-c774-40f9-a0de-d6f1a6f82714" path="/var/lib/kubelet/pods/34c69792-c774-40f9-a0de-d6f1a6f82714/volumes" Jan 21 10:45:06 crc kubenswrapper[4684]: I0121 10:45:06.514427 4684 scope.go:117] "RemoveContainer" containerID="062d85d4dd293a66c5ea87b72d5ef4e17f0f11089742f21c2445dde53232d20e" Jan 21 10:45:06 crc kubenswrapper[4684]: E0121 10:45:06.514750 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:45:07 crc kubenswrapper[4684]: I0121 10:45:07.368930 4684 scope.go:117] "RemoveContainer" containerID="392b7419a03d980ad1336dc178d203186f5ce7cd936f5cbb08d149a772961c37" Jan 21 10:45:18 crc kubenswrapper[4684]: I0121 10:45:18.517634 4684 scope.go:117] "RemoveContainer" containerID="062d85d4dd293a66c5ea87b72d5ef4e17f0f11089742f21c2445dde53232d20e" Jan 21 10:45:18 crc kubenswrapper[4684]: E0121 10:45:18.518417 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:45:29 crc kubenswrapper[4684]: I0121 10:45:29.515170 4684 scope.go:117] "RemoveContainer" containerID="062d85d4dd293a66c5ea87b72d5ef4e17f0f11089742f21c2445dde53232d20e" Jan 21 10:45:29 crc kubenswrapper[4684]: E0121 10:45:29.516111 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:45:41 crc kubenswrapper[4684]: I0121 10:45:41.514692 4684 scope.go:117] "RemoveContainer" containerID="062d85d4dd293a66c5ea87b72d5ef4e17f0f11089742f21c2445dde53232d20e" Jan 21 10:45:41 crc kubenswrapper[4684]: E0121 10:45:41.515383 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:45:53 crc kubenswrapper[4684]: I0121 10:45:53.518765 4684 scope.go:117] "RemoveContainer" containerID="062d85d4dd293a66c5ea87b72d5ef4e17f0f11089742f21c2445dde53232d20e" Jan 21 10:45:53 crc kubenswrapper[4684]: E0121 10:45:53.520204 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:46:05 crc kubenswrapper[4684]: I0121 10:46:05.514908 4684 scope.go:117] "RemoveContainer" containerID="062d85d4dd293a66c5ea87b72d5ef4e17f0f11089742f21c2445dde53232d20e" Jan 21 10:46:05 crc kubenswrapper[4684]: E0121 10:46:05.516959 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:46:17 crc kubenswrapper[4684]: I0121 10:46:17.515019 4684 scope.go:117] "RemoveContainer" containerID="062d85d4dd293a66c5ea87b72d5ef4e17f0f11089742f21c2445dde53232d20e" Jan 21 10:46:17 crc kubenswrapper[4684]: E0121 10:46:17.515860 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:46:29 crc kubenswrapper[4684]: I0121 10:46:29.515316 4684 scope.go:117] "RemoveContainer" containerID="062d85d4dd293a66c5ea87b72d5ef4e17f0f11089742f21c2445dde53232d20e" Jan 21 10:46:29 crc kubenswrapper[4684]: E0121 10:46:29.516191 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:46:41 crc kubenswrapper[4684]: I0121 10:46:41.514456 4684 scope.go:117] "RemoveContainer" containerID="062d85d4dd293a66c5ea87b72d5ef4e17f0f11089742f21c2445dde53232d20e" Jan 21 10:46:41 crc kubenswrapper[4684]: E0121 10:46:41.515465 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:46:56 crc kubenswrapper[4684]: I0121 10:46:56.519209 4684 scope.go:117] "RemoveContainer" containerID="062d85d4dd293a66c5ea87b72d5ef4e17f0f11089742f21c2445dde53232d20e" Jan 21 10:46:56 crc kubenswrapper[4684]: E0121 10:46:56.520042 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:46:57 crc kubenswrapper[4684]: I0121 10:46:57.942583 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-p24l5"] Jan 21 10:46:57 crc kubenswrapper[4684]: E0121 10:46:57.943190 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c809130-899d-472a-bec6-e6d415317622" containerName="collect-profiles" Jan 21 10:46:57 crc kubenswrapper[4684]: I0121 10:46:57.943204 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c809130-899d-472a-bec6-e6d415317622" containerName="collect-profiles" Jan 21 10:46:57 crc kubenswrapper[4684]: I0121 10:46:57.943380 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c809130-899d-472a-bec6-e6d415317622" containerName="collect-profiles" Jan 21 10:46:57 crc kubenswrapper[4684]: I0121 10:46:57.944010 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-p24l5" Jan 21 10:46:57 crc kubenswrapper[4684]: I0121 10:46:57.969062 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-p24l5"] Jan 21 10:46:58 crc kubenswrapper[4684]: I0121 10:46:58.126860 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb6qx\" (UniqueName: \"kubernetes.io/projected/3ecb4b1e-76ce-4975-9169-2d3f2d3734fe-kube-api-access-zb6qx\") pod \"infrawatch-operators-p24l5\" (UID: \"3ecb4b1e-76ce-4975-9169-2d3f2d3734fe\") " pod="service-telemetry/infrawatch-operators-p24l5" Jan 21 10:46:58 crc kubenswrapper[4684]: I0121 10:46:58.228166 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb6qx\" (UniqueName: \"kubernetes.io/projected/3ecb4b1e-76ce-4975-9169-2d3f2d3734fe-kube-api-access-zb6qx\") pod \"infrawatch-operators-p24l5\" (UID: \"3ecb4b1e-76ce-4975-9169-2d3f2d3734fe\") " pod="service-telemetry/infrawatch-operators-p24l5" Jan 21 10:46:58 crc kubenswrapper[4684]: I0121 10:46:58.249766 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb6qx\" (UniqueName: \"kubernetes.io/projected/3ecb4b1e-76ce-4975-9169-2d3f2d3734fe-kube-api-access-zb6qx\") pod \"infrawatch-operators-p24l5\" (UID: \"3ecb4b1e-76ce-4975-9169-2d3f2d3734fe\") " pod="service-telemetry/infrawatch-operators-p24l5" Jan 21 10:46:58 crc kubenswrapper[4684]: I0121 10:46:58.304189 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-p24l5" Jan 21 10:46:58 crc kubenswrapper[4684]: I0121 10:46:58.528294 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-p24l5"] Jan 21 10:46:58 crc kubenswrapper[4684]: I0121 10:46:58.537869 4684 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 10:46:58 crc kubenswrapper[4684]: I0121 10:46:58.597560 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-p24l5" event={"ID":"3ecb4b1e-76ce-4975-9169-2d3f2d3734fe","Type":"ContainerStarted","Data":"1c0f44792f58dcac991ba35f4de23e0fe9393e342be714427232d0ecc9c2897f"} Jan 21 10:46:59 crc kubenswrapper[4684]: I0121 10:46:59.609339 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-p24l5" event={"ID":"3ecb4b1e-76ce-4975-9169-2d3f2d3734fe","Type":"ContainerStarted","Data":"b22105cba3a0cf6e7679df468a4c9ec8a4dc851e53a0a4cd8c9d6bcd1b25ccaf"} Jan 21 10:46:59 crc kubenswrapper[4684]: I0121 10:46:59.640762 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-p24l5" podStartSLOduration=2.109611821 podStartE2EDuration="2.640737029s" podCreationTimestamp="2026-01-21 10:46:57 +0000 UTC" firstStartedPulling="2026-01-21 10:46:58.53762781 +0000 UTC m=+2456.295710777" lastFinishedPulling="2026-01-21 10:46:59.068753018 +0000 UTC m=+2456.826835985" observedRunningTime="2026-01-21 10:46:59.634340269 +0000 UTC m=+2457.392423276" watchObservedRunningTime="2026-01-21 10:46:59.640737029 +0000 UTC m=+2457.398820036" Jan 21 10:47:08 crc kubenswrapper[4684]: I0121 10:47:08.304335 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-p24l5" Jan 21 10:47:08 crc kubenswrapper[4684]: I0121 10:47:08.304940 4684 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-p24l5" Jan 21 10:47:08 crc kubenswrapper[4684]: I0121 10:47:08.338009 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-p24l5" Jan 21 10:47:08 crc kubenswrapper[4684]: I0121 10:47:08.693486 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-p24l5" Jan 21 10:47:10 crc kubenswrapper[4684]: I0121 10:47:10.514209 4684 scope.go:117] "RemoveContainer" containerID="062d85d4dd293a66c5ea87b72d5ef4e17f0f11089742f21c2445dde53232d20e" Jan 21 10:47:10 crc kubenswrapper[4684]: E0121 10:47:10.514711 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:47:11 crc kubenswrapper[4684]: I0121 10:47:11.548056 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-p24l5"] Jan 21 10:47:11 crc kubenswrapper[4684]: I0121 10:47:11.548458 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-p24l5" podUID="3ecb4b1e-76ce-4975-9169-2d3f2d3734fe" containerName="registry-server" containerID="cri-o://b22105cba3a0cf6e7679df468a4c9ec8a4dc851e53a0a4cd8c9d6bcd1b25ccaf" gracePeriod=2 Jan 21 10:47:11 crc kubenswrapper[4684]: I0121 10:47:11.692231 4684 generic.go:334] "Generic (PLEG): container finished" podID="3ecb4b1e-76ce-4975-9169-2d3f2d3734fe" containerID="b22105cba3a0cf6e7679df468a4c9ec8a4dc851e53a0a4cd8c9d6bcd1b25ccaf" exitCode=0 Jan 21 10:47:11 crc kubenswrapper[4684]: I0121 10:47:11.692281 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-p24l5" event={"ID":"3ecb4b1e-76ce-4975-9169-2d3f2d3734fe","Type":"ContainerDied","Data":"b22105cba3a0cf6e7679df468a4c9ec8a4dc851e53a0a4cd8c9d6bcd1b25ccaf"} Jan 21 10:47:12 crc kubenswrapper[4684]: I0121 10:47:12.025127 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-p24l5" Jan 21 10:47:12 crc kubenswrapper[4684]: I0121 10:47:12.045909 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb6qx\" (UniqueName: \"kubernetes.io/projected/3ecb4b1e-76ce-4975-9169-2d3f2d3734fe-kube-api-access-zb6qx\") pod \"3ecb4b1e-76ce-4975-9169-2d3f2d3734fe\" (UID: \"3ecb4b1e-76ce-4975-9169-2d3f2d3734fe\") " Jan 21 10:47:12 crc kubenswrapper[4684]: I0121 10:47:12.051839 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ecb4b1e-76ce-4975-9169-2d3f2d3734fe-kube-api-access-zb6qx" (OuterVolumeSpecName: "kube-api-access-zb6qx") pod "3ecb4b1e-76ce-4975-9169-2d3f2d3734fe" (UID: "3ecb4b1e-76ce-4975-9169-2d3f2d3734fe"). InnerVolumeSpecName "kube-api-access-zb6qx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:47:12 crc kubenswrapper[4684]: I0121 10:47:12.147241 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zb6qx\" (UniqueName: \"kubernetes.io/projected/3ecb4b1e-76ce-4975-9169-2d3f2d3734fe-kube-api-access-zb6qx\") on node \"crc\" DevicePath \"\"" Jan 21 10:47:12 crc kubenswrapper[4684]: I0121 10:47:12.699216 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-p24l5" event={"ID":"3ecb4b1e-76ce-4975-9169-2d3f2d3734fe","Type":"ContainerDied","Data":"1c0f44792f58dcac991ba35f4de23e0fe9393e342be714427232d0ecc9c2897f"} Jan 21 10:47:12 crc kubenswrapper[4684]: I0121 10:47:12.699561 4684 scope.go:117] "RemoveContainer" containerID="b22105cba3a0cf6e7679df468a4c9ec8a4dc851e53a0a4cd8c9d6bcd1b25ccaf" Jan 21 10:47:12 crc kubenswrapper[4684]: I0121 10:47:12.699461 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-p24l5" Jan 21 10:47:12 crc kubenswrapper[4684]: I0121 10:47:12.722520 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-p24l5"] Jan 21 10:47:12 crc kubenswrapper[4684]: I0121 10:47:12.728582 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-p24l5"] Jan 21 10:47:14 crc kubenswrapper[4684]: I0121 10:47:14.525877 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ecb4b1e-76ce-4975-9169-2d3f2d3734fe" path="/var/lib/kubelet/pods/3ecb4b1e-76ce-4975-9169-2d3f2d3734fe/volumes" Jan 21 10:47:21 crc kubenswrapper[4684]: I0121 10:47:21.514222 4684 scope.go:117] "RemoveContainer" containerID="062d85d4dd293a66c5ea87b72d5ef4e17f0f11089742f21c2445dde53232d20e" Jan 21 10:47:21 crc kubenswrapper[4684]: E0121 10:47:21.515048 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:47:34 crc kubenswrapper[4684]: I0121 10:47:34.514467 4684 scope.go:117] "RemoveContainer" containerID="062d85d4dd293a66c5ea87b72d5ef4e17f0f11089742f21c2445dde53232d20e" Jan 21 10:47:34 crc kubenswrapper[4684]: E0121 10:47:34.515334 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:47:45 crc kubenswrapper[4684]: I0121 10:47:45.515496 4684 scope.go:117] "RemoveContainer" containerID="062d85d4dd293a66c5ea87b72d5ef4e17f0f11089742f21c2445dde53232d20e" Jan 21 10:47:45 crc kubenswrapper[4684]: E0121 10:47:45.516671 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:47:57 crc kubenswrapper[4684]: I0121 10:47:57.515286 4684 scope.go:117] "RemoveContainer" containerID="062d85d4dd293a66c5ea87b72d5ef4e17f0f11089742f21c2445dde53232d20e" Jan 21 10:47:57 crc kubenswrapper[4684]: E0121 10:47:57.516188 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:48:11 crc kubenswrapper[4684]: I0121 10:48:11.514867 4684 scope.go:117] "RemoveContainer" containerID="062d85d4dd293a66c5ea87b72d5ef4e17f0f11089742f21c2445dde53232d20e" Jan 21 10:48:12 crc kubenswrapper[4684]: I0121 10:48:12.151672 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" event={"ID":"55d2c484-cf10-46b5-913f-2a033a2ff5c1","Type":"ContainerStarted","Data":"866c8599372c3bd89310f1e99b820fdd014225f6a396f33cc0166af41ed7a755"} Jan 21 10:49:07 crc kubenswrapper[4684]: I0121 10:49:07.638528 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x4t5j"] Jan 21 10:49:07 crc kubenswrapper[4684]: E0121 10:49:07.639317 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ecb4b1e-76ce-4975-9169-2d3f2d3734fe" containerName="registry-server" Jan 21 10:49:07 crc kubenswrapper[4684]: I0121 10:49:07.639329 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ecb4b1e-76ce-4975-9169-2d3f2d3734fe" containerName="registry-server" Jan 21 10:49:07 crc kubenswrapper[4684]: I0121 10:49:07.639488 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ecb4b1e-76ce-4975-9169-2d3f2d3734fe" containerName="registry-server" Jan 21 10:49:07 crc kubenswrapper[4684]: I0121 10:49:07.640326 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x4t5j" Jan 21 10:49:07 crc kubenswrapper[4684]: I0121 10:49:07.657884 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x4t5j"] Jan 21 10:49:07 crc kubenswrapper[4684]: I0121 10:49:07.709872 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2wtg\" (UniqueName: \"kubernetes.io/projected/ae10c885-2098-40ce-be49-b7e88dd4c95d-kube-api-access-k2wtg\") pod \"redhat-operators-x4t5j\" (UID: \"ae10c885-2098-40ce-be49-b7e88dd4c95d\") " pod="openshift-marketplace/redhat-operators-x4t5j" Jan 21 10:49:07 crc kubenswrapper[4684]: I0121 10:49:07.709936 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae10c885-2098-40ce-be49-b7e88dd4c95d-catalog-content\") pod \"redhat-operators-x4t5j\" (UID: \"ae10c885-2098-40ce-be49-b7e88dd4c95d\") " pod="openshift-marketplace/redhat-operators-x4t5j" Jan 21 10:49:07 crc kubenswrapper[4684]: I0121 10:49:07.709974 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae10c885-2098-40ce-be49-b7e88dd4c95d-utilities\") pod \"redhat-operators-x4t5j\" (UID: \"ae10c885-2098-40ce-be49-b7e88dd4c95d\") " pod="openshift-marketplace/redhat-operators-x4t5j" Jan 21 10:49:07 crc kubenswrapper[4684]: I0121 10:49:07.810890 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2wtg\" (UniqueName: \"kubernetes.io/projected/ae10c885-2098-40ce-be49-b7e88dd4c95d-kube-api-access-k2wtg\") pod \"redhat-operators-x4t5j\" (UID: \"ae10c885-2098-40ce-be49-b7e88dd4c95d\") " pod="openshift-marketplace/redhat-operators-x4t5j" Jan 21 10:49:07 crc kubenswrapper[4684]: I0121 10:49:07.810964 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae10c885-2098-40ce-be49-b7e88dd4c95d-catalog-content\") pod \"redhat-operators-x4t5j\" (UID: \"ae10c885-2098-40ce-be49-b7e88dd4c95d\") " pod="openshift-marketplace/redhat-operators-x4t5j" Jan 21 10:49:07 crc kubenswrapper[4684]: I0121 10:49:07.811008 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae10c885-2098-40ce-be49-b7e88dd4c95d-utilities\") pod \"redhat-operators-x4t5j\" (UID: \"ae10c885-2098-40ce-be49-b7e88dd4c95d\") " pod="openshift-marketplace/redhat-operators-x4t5j" Jan 21 10:49:07 crc kubenswrapper[4684]: I0121 10:49:07.811699 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae10c885-2098-40ce-be49-b7e88dd4c95d-utilities\") pod \"redhat-operators-x4t5j\" (UID: \"ae10c885-2098-40ce-be49-b7e88dd4c95d\") " pod="openshift-marketplace/redhat-operators-x4t5j" Jan 21 10:49:07 crc kubenswrapper[4684]: I0121 10:49:07.811914 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae10c885-2098-40ce-be49-b7e88dd4c95d-catalog-content\") pod \"redhat-operators-x4t5j\" (UID: \"ae10c885-2098-40ce-be49-b7e88dd4c95d\") " pod="openshift-marketplace/redhat-operators-x4t5j" Jan 21 10:49:07 crc kubenswrapper[4684]: I0121 10:49:07.832441 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-k2wtg\" (UniqueName: \"kubernetes.io/projected/ae10c885-2098-40ce-be49-b7e88dd4c95d-kube-api-access-k2wtg\") pod \"redhat-operators-x4t5j\" (UID: \"ae10c885-2098-40ce-be49-b7e88dd4c95d\") " pod="openshift-marketplace/redhat-operators-x4t5j" Jan 21 10:49:07 crc kubenswrapper[4684]: I0121 10:49:07.958143 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x4t5j" Jan 21 10:49:08 crc kubenswrapper[4684]: I0121 10:49:08.436116 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x4t5j"] Jan 21 10:49:08 crc kubenswrapper[4684]: I0121 10:49:08.568522 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4t5j" event={"ID":"ae10c885-2098-40ce-be49-b7e88dd4c95d","Type":"ContainerStarted","Data":"62d7b876903de21241b76037068357109653ca207d7a8144b3b098eb08518593"} Jan 21 10:49:09 crc kubenswrapper[4684]: I0121 10:49:09.583156 4684 generic.go:334] "Generic (PLEG): container finished" podID="ae10c885-2098-40ce-be49-b7e88dd4c95d" containerID="38d350b3bafcb9432e10ab57d3e923ffb8692d418ff617b8ac95b087663bd9aa" exitCode=0 Jan 21 10:49:09 crc kubenswrapper[4684]: I0121 10:49:09.583504 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4t5j" event={"ID":"ae10c885-2098-40ce-be49-b7e88dd4c95d","Type":"ContainerDied","Data":"38d350b3bafcb9432e10ab57d3e923ffb8692d418ff617b8ac95b087663bd9aa"} Jan 21 10:49:11 crc kubenswrapper[4684]: I0121 10:49:11.601098 4684 generic.go:334] "Generic (PLEG): container finished" podID="ae10c885-2098-40ce-be49-b7e88dd4c95d" containerID="3215cbde628fca90537af61a5a3efd89bb4576e2f3adc1e617eb32f9d1db01c6" exitCode=0 Jan 21 10:49:11 crc kubenswrapper[4684]: I0121 10:49:11.601174 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4t5j" event={"ID":"ae10c885-2098-40ce-be49-b7e88dd4c95d","Type":"ContainerDied","Data":"3215cbde628fca90537af61a5a3efd89bb4576e2f3adc1e617eb32f9d1db01c6"} Jan 21 10:49:13 crc kubenswrapper[4684]: I0121 10:49:13.617622 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4t5j" event={"ID":"ae10c885-2098-40ce-be49-b7e88dd4c95d","Type":"ContainerStarted","Data":"7441782b7d26a1dacff6b58d6685ee98d52700b7ecffbf4184f3d85313b31a71"} Jan 21 10:49:13 crc kubenswrapper[4684]: I0121 10:49:13.640053 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x4t5j" podStartSLOduration=3.262392777 podStartE2EDuration="6.640036184s" podCreationTimestamp="2026-01-21 10:49:07 +0000 UTC" firstStartedPulling="2026-01-21 10:49:09.586748357 +0000 UTC m=+2587.344831354" lastFinishedPulling="2026-01-21 10:49:12.964391794 +0000 UTC m=+2590.722474761" observedRunningTime="2026-01-21 10:49:13.639334922 +0000 UTC m=+2591.397417899" watchObservedRunningTime="2026-01-21 10:49:13.640036184 +0000 UTC m=+2591.398119151" Jan 21 10:49:17 crc kubenswrapper[4684]: I0121 10:49:17.958557 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x4t5j" Jan 21 10:49:17 crc kubenswrapper[4684]: I0121 10:49:17.959993 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x4t5j" Jan 21 10:49:19 crc kubenswrapper[4684]: I0121 10:49:19.012610 4684 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-x4t5j" podUID="ae10c885-2098-40ce-be49-b7e88dd4c95d" containerName="registry-server" probeResult="failure" output=< Jan 21 10:49:19 crc kubenswrapper[4684]: timeout: failed to connect service ":50051" within 1s Jan 21 10:49:19 crc kubenswrapper[4684]: > Jan 21 10:49:28 crc kubenswrapper[4684]: I0121 10:49:28.007750 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x4t5j" Jan 21 10:49:28 crc kubenswrapper[4684]: I0121 10:49:28.057309 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x4t5j" Jan 21 10:49:28 crc kubenswrapper[4684]: I0121 10:49:28.244805 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x4t5j"] Jan 21 10:49:29 crc kubenswrapper[4684]: I0121 10:49:29.721469 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x4t5j" podUID="ae10c885-2098-40ce-be49-b7e88dd4c95d" containerName="registry-server" containerID="cri-o://7441782b7d26a1dacff6b58d6685ee98d52700b7ecffbf4184f3d85313b31a71" gracePeriod=2 Jan 21 10:49:30 crc kubenswrapper[4684]: I0121 10:49:30.097819 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x4t5j" Jan 21 10:49:30 crc kubenswrapper[4684]: I0121 10:49:30.233058 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae10c885-2098-40ce-be49-b7e88dd4c95d-catalog-content\") pod \"ae10c885-2098-40ce-be49-b7e88dd4c95d\" (UID: \"ae10c885-2098-40ce-be49-b7e88dd4c95d\") " Jan 21 10:49:30 crc kubenswrapper[4684]: I0121 10:49:30.233181 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae10c885-2098-40ce-be49-b7e88dd4c95d-utilities\") pod \"ae10c885-2098-40ce-be49-b7e88dd4c95d\" (UID: \"ae10c885-2098-40ce-be49-b7e88dd4c95d\") " Jan 21 10:49:30 crc kubenswrapper[4684]: I0121 10:49:30.233202 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2wtg\" (UniqueName: \"kubernetes.io/projected/ae10c885-2098-40ce-be49-b7e88dd4c95d-kube-api-access-k2wtg\") pod \"ae10c885-2098-40ce-be49-b7e88dd4c95d\" (UID: \"ae10c885-2098-40ce-be49-b7e88dd4c95d\") " Jan 21 10:49:30 crc kubenswrapper[4684]: I0121 10:49:30.234993 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae10c885-2098-40ce-be49-b7e88dd4c95d-utilities" (OuterVolumeSpecName: "utilities") pod "ae10c885-2098-40ce-be49-b7e88dd4c95d" (UID: "ae10c885-2098-40ce-be49-b7e88dd4c95d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:49:30 crc kubenswrapper[4684]: I0121 10:49:30.243875 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae10c885-2098-40ce-be49-b7e88dd4c95d-kube-api-access-k2wtg" (OuterVolumeSpecName: "kube-api-access-k2wtg") pod "ae10c885-2098-40ce-be49-b7e88dd4c95d" (UID: "ae10c885-2098-40ce-be49-b7e88dd4c95d"). InnerVolumeSpecName "kube-api-access-k2wtg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:49:30 crc kubenswrapper[4684]: I0121 10:49:30.335102 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae10c885-2098-40ce-be49-b7e88dd4c95d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:49:30 crc kubenswrapper[4684]: I0121 10:49:30.335149 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2wtg\" (UniqueName: \"kubernetes.io/projected/ae10c885-2098-40ce-be49-b7e88dd4c95d-kube-api-access-k2wtg\") on node \"crc\" DevicePath \"\"" Jan 21 10:49:30 crc kubenswrapper[4684]: I0121 10:49:30.382160 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae10c885-2098-40ce-be49-b7e88dd4c95d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae10c885-2098-40ce-be49-b7e88dd4c95d" (UID: "ae10c885-2098-40ce-be49-b7e88dd4c95d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:49:30 crc kubenswrapper[4684]: I0121 10:49:30.436792 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae10c885-2098-40ce-be49-b7e88dd4c95d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:49:30 crc kubenswrapper[4684]: I0121 10:49:30.729782 4684 generic.go:334] "Generic (PLEG): container finished" podID="ae10c885-2098-40ce-be49-b7e88dd4c95d" containerID="7441782b7d26a1dacff6b58d6685ee98d52700b7ecffbf4184f3d85313b31a71" exitCode=0 Jan 21 10:49:30 crc kubenswrapper[4684]: I0121 10:49:30.729858 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x4t5j" Jan 21 10:49:30 crc kubenswrapper[4684]: I0121 10:49:30.729858 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4t5j" event={"ID":"ae10c885-2098-40ce-be49-b7e88dd4c95d","Type":"ContainerDied","Data":"7441782b7d26a1dacff6b58d6685ee98d52700b7ecffbf4184f3d85313b31a71"} Jan 21 10:49:30 crc kubenswrapper[4684]: I0121 10:49:30.730000 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4t5j" event={"ID":"ae10c885-2098-40ce-be49-b7e88dd4c95d","Type":"ContainerDied","Data":"62d7b876903de21241b76037068357109653ca207d7a8144b3b098eb08518593"} Jan 21 10:49:30 crc kubenswrapper[4684]: I0121 10:49:30.730027 4684 scope.go:117] "RemoveContainer" containerID="7441782b7d26a1dacff6b58d6685ee98d52700b7ecffbf4184f3d85313b31a71" Jan 21 10:49:30 crc kubenswrapper[4684]: I0121 10:49:30.749551 4684 scope.go:117] "RemoveContainer" containerID="3215cbde628fca90537af61a5a3efd89bb4576e2f3adc1e617eb32f9d1db01c6" Jan 21 10:49:30 crc kubenswrapper[4684]: I0121 10:49:30.750903 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x4t5j"] Jan 21 10:49:30 crc kubenswrapper[4684]: I0121 10:49:30.768083 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x4t5j"] Jan 21 10:49:30 crc kubenswrapper[4684]: I0121 10:49:30.773611 4684 scope.go:117] "RemoveContainer" containerID="38d350b3bafcb9432e10ab57d3e923ffb8692d418ff617b8ac95b087663bd9aa" Jan 21 10:49:30 crc kubenswrapper[4684]: I0121 10:49:30.794560 4684 scope.go:117] "RemoveContainer" containerID="7441782b7d26a1dacff6b58d6685ee98d52700b7ecffbf4184f3d85313b31a71" Jan 21 10:49:30 crc kubenswrapper[4684]: E0121 10:49:30.795212 4684 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7441782b7d26a1dacff6b58d6685ee98d52700b7ecffbf4184f3d85313b31a71\": container with ID starting with 7441782b7d26a1dacff6b58d6685ee98d52700b7ecffbf4184f3d85313b31a71 not found: ID does not exist" containerID="7441782b7d26a1dacff6b58d6685ee98d52700b7ecffbf4184f3d85313b31a71" Jan 21 10:49:30 crc kubenswrapper[4684]: I0121 10:49:30.795253 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7441782b7d26a1dacff6b58d6685ee98d52700b7ecffbf4184f3d85313b31a71"} err="failed to get container status \"7441782b7d26a1dacff6b58d6685ee98d52700b7ecffbf4184f3d85313b31a71\": rpc error: code = NotFound desc = could not find container \"7441782b7d26a1dacff6b58d6685ee98d52700b7ecffbf4184f3d85313b31a71\": container with ID starting with 7441782b7d26a1dacff6b58d6685ee98d52700b7ecffbf4184f3d85313b31a71 not found: ID does not exist" Jan 21 10:49:30 crc kubenswrapper[4684]: I0121 10:49:30.795284 4684 scope.go:117] "RemoveContainer" containerID="3215cbde628fca90537af61a5a3efd89bb4576e2f3adc1e617eb32f9d1db01c6" Jan 21 10:49:30 crc kubenswrapper[4684]: E0121 10:49:30.795674 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3215cbde628fca90537af61a5a3efd89bb4576e2f3adc1e617eb32f9d1db01c6\": container with ID starting with 3215cbde628fca90537af61a5a3efd89bb4576e2f3adc1e617eb32f9d1db01c6 not found: ID does not exist" containerID="3215cbde628fca90537af61a5a3efd89bb4576e2f3adc1e617eb32f9d1db01c6" Jan 21 10:49:30 crc kubenswrapper[4684]: I0121 10:49:30.795705 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3215cbde628fca90537af61a5a3efd89bb4576e2f3adc1e617eb32f9d1db01c6"} err="failed to get container status \"3215cbde628fca90537af61a5a3efd89bb4576e2f3adc1e617eb32f9d1db01c6\": rpc error: code = NotFound desc = could not find container \"3215cbde628fca90537af61a5a3efd89bb4576e2f3adc1e617eb32f9d1db01c6\": container with ID starting with 3215cbde628fca90537af61a5a3efd89bb4576e2f3adc1e617eb32f9d1db01c6 not found: ID does not exist" Jan 21 10:49:30 crc kubenswrapper[4684]: I0121 10:49:30.795724 4684 scope.go:117] "RemoveContainer" containerID="38d350b3bafcb9432e10ab57d3e923ffb8692d418ff617b8ac95b087663bd9aa" Jan 21 10:49:30 crc kubenswrapper[4684]: E0121 10:49:30.796041 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38d350b3bafcb9432e10ab57d3e923ffb8692d418ff617b8ac95b087663bd9aa\": container with ID starting with 38d350b3bafcb9432e10ab57d3e923ffb8692d418ff617b8ac95b087663bd9aa not found: ID does not exist" containerID="38d350b3bafcb9432e10ab57d3e923ffb8692d418ff617b8ac95b087663bd9aa" Jan 21 10:49:30 crc kubenswrapper[4684]: I0121 10:49:30.796067 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38d350b3bafcb9432e10ab57d3e923ffb8692d418ff617b8ac95b087663bd9aa"} err="failed to get container status \"38d350b3bafcb9432e10ab57d3e923ffb8692d418ff617b8ac95b087663bd9aa\": rpc error: code = NotFound desc = could not find container \"38d350b3bafcb9432e10ab57d3e923ffb8692d418ff617b8ac95b087663bd9aa\": container with ID starting with 38d350b3bafcb9432e10ab57d3e923ffb8692d418ff617b8ac95b087663bd9aa not found: ID does not exist" Jan 21 10:49:32 crc kubenswrapper[4684]: I0121 10:49:32.524073 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="ae10c885-2098-40ce-be49-b7e88dd4c95d" path="/var/lib/kubelet/pods/ae10c885-2098-40ce-be49-b7e88dd4c95d/volumes" Jan 21 10:50:37 crc kubenswrapper[4684]: I0121 10:50:37.302849 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:50:37 crc kubenswrapper[4684]: I0121 10:50:37.303375 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:50:43 crc kubenswrapper[4684]: I0121 10:50:43.003902 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kr7xf"] Jan 21 10:50:43 crc kubenswrapper[4684]: E0121 10:50:43.004692 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae10c885-2098-40ce-be49-b7e88dd4c95d" containerName="extract-content" Jan 21 10:50:43 crc kubenswrapper[4684]: I0121 10:50:43.004735 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae10c885-2098-40ce-be49-b7e88dd4c95d" containerName="extract-content" Jan 21 10:50:43 crc kubenswrapper[4684]: E0121 10:50:43.004748 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae10c885-2098-40ce-be49-b7e88dd4c95d" containerName="registry-server" Jan 21 10:50:43 crc kubenswrapper[4684]: I0121 10:50:43.004754 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae10c885-2098-40ce-be49-b7e88dd4c95d" containerName="registry-server" Jan 21 10:50:43 crc kubenswrapper[4684]: E0121 10:50:43.004771 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae10c885-2098-40ce-be49-b7e88dd4c95d" containerName="extract-utilities" Jan 21 10:50:43 crc kubenswrapper[4684]: I0121 10:50:43.004806 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae10c885-2098-40ce-be49-b7e88dd4c95d" containerName="extract-utilities" Jan 21 10:50:43 crc kubenswrapper[4684]: I0121 10:50:43.005537 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae10c885-2098-40ce-be49-b7e88dd4c95d" containerName="registry-server" Jan 21 10:50:43 crc kubenswrapper[4684]: I0121 10:50:43.007431 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kr7xf" Jan 21 10:50:43 crc kubenswrapper[4684]: I0121 10:50:43.021654 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kr7xf"] Jan 21 10:50:43 crc kubenswrapper[4684]: I0121 10:50:43.177718 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6add426-1a03-4ddf-9a93-70347e6f917f-utilities\") pod \"certified-operators-kr7xf\" (UID: \"f6add426-1a03-4ddf-9a93-70347e6f917f\") " pod="openshift-marketplace/certified-operators-kr7xf" Jan 21 10:50:43 crc kubenswrapper[4684]: I0121 10:50:43.177782 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6add426-1a03-4ddf-9a93-70347e6f917f-catalog-content\") pod \"certified-operators-kr7xf\" (UID: \"f6add426-1a03-4ddf-9a93-70347e6f917f\") " pod="openshift-marketplace/certified-operators-kr7xf" Jan 21 10:50:43 crc kubenswrapper[4684]: I0121 10:50:43.177801 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txm9z\" (UniqueName: \"kubernetes.io/projected/f6add426-1a03-4ddf-9a93-70347e6f917f-kube-api-access-txm9z\") pod \"certified-operators-kr7xf\" (UID: \"f6add426-1a03-4ddf-9a93-70347e6f917f\") " pod="openshift-marketplace/certified-operators-kr7xf" Jan 21 10:50:43 crc kubenswrapper[4684]: I0121 10:50:43.279510 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6add426-1a03-4ddf-9a93-70347e6f917f-utilities\") pod \"certified-operators-kr7xf\" (UID: \"f6add426-1a03-4ddf-9a93-70347e6f917f\") " pod="openshift-marketplace/certified-operators-kr7xf" Jan 21 10:50:43 crc kubenswrapper[4684]: I0121 10:50:43.279637 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6add426-1a03-4ddf-9a93-70347e6f917f-catalog-content\") pod \"certified-operators-kr7xf\" (UID: \"f6add426-1a03-4ddf-9a93-70347e6f917f\") " pod="openshift-marketplace/certified-operators-kr7xf" Jan 21 10:50:43 crc kubenswrapper[4684]: I0121 10:50:43.279767 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txm9z\" (UniqueName: \"kubernetes.io/projected/f6add426-1a03-4ddf-9a93-70347e6f917f-kube-api-access-txm9z\") pod \"certified-operators-kr7xf\" (UID: \"f6add426-1a03-4ddf-9a93-70347e6f917f\") " pod="openshift-marketplace/certified-operators-kr7xf" Jan 21 10:50:43 crc kubenswrapper[4684]: I0121 10:50:43.280427 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6add426-1a03-4ddf-9a93-70347e6f917f-utilities\") pod \"certified-operators-kr7xf\" (UID: \"f6add426-1a03-4ddf-9a93-70347e6f917f\") " pod="openshift-marketplace/certified-operators-kr7xf" Jan 21 10:50:43 crc kubenswrapper[4684]: I0121 10:50:43.280495 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6add426-1a03-4ddf-9a93-70347e6f917f-catalog-content\") pod \"certified-operators-kr7xf\" (UID: \"f6add426-1a03-4ddf-9a93-70347e6f917f\") " pod="openshift-marketplace/certified-operators-kr7xf" Jan 21 10:50:43 crc kubenswrapper[4684]: I0121 10:50:43.298568 4684 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-txm9z\" (UniqueName: \"kubernetes.io/projected/f6add426-1a03-4ddf-9a93-70347e6f917f-kube-api-access-txm9z\") pod \"certified-operators-kr7xf\" (UID: \"f6add426-1a03-4ddf-9a93-70347e6f917f\") " pod="openshift-marketplace/certified-operators-kr7xf" Jan 21 10:50:43 crc kubenswrapper[4684]: I0121 10:50:43.330256 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kr7xf" Jan 21 10:50:43 crc kubenswrapper[4684]: I0121 10:50:43.607011 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kr7xf"] Jan 21 10:50:43 crc kubenswrapper[4684]: W0121 10:50:43.613060 4684 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6add426_1a03_4ddf_9a93_70347e6f917f.slice/crio-e110b399020674ca727a978feec9b1d4c8c9a2e530ba2c93135910ecd6256810 WatchSource:0}: Error finding container e110b399020674ca727a978feec9b1d4c8c9a2e530ba2c93135910ecd6256810: Status 404 returned error can't find the container with id e110b399020674ca727a978feec9b1d4c8c9a2e530ba2c93135910ecd6256810 Jan 21 10:50:44 crc kubenswrapper[4684]: I0121 10:50:44.239233 4684 generic.go:334] "Generic (PLEG): container finished" podID="f6add426-1a03-4ddf-9a93-70347e6f917f" containerID="91325496c6ce1a3faed31c03367febb22822db9443faafd2e2d3e8f7c4e6d7e8" exitCode=0 Jan 21 10:50:44 crc kubenswrapper[4684]: I0121 10:50:44.239275 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kr7xf" event={"ID":"f6add426-1a03-4ddf-9a93-70347e6f917f","Type":"ContainerDied","Data":"91325496c6ce1a3faed31c03367febb22822db9443faafd2e2d3e8f7c4e6d7e8"} Jan 21 10:50:44 crc kubenswrapper[4684]: I0121 10:50:44.239302 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kr7xf" event={"ID":"f6add426-1a03-4ddf-9a93-70347e6f917f","Type":"ContainerStarted","Data":"e110b399020674ca727a978feec9b1d4c8c9a2e530ba2c93135910ecd6256810"} Jan 21 10:50:45 crc kubenswrapper[4684]: I0121 10:50:45.248674 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kr7xf" event={"ID":"f6add426-1a03-4ddf-9a93-70347e6f917f","Type":"ContainerStarted","Data":"067c0f2cffcd4bbf0799f0319cb6357128e19e779398ff2071cec944818a46bf"} Jan 21 10:50:46 crc kubenswrapper[4684]: I0121 10:50:46.260328 4684 generic.go:334] "Generic (PLEG): container finished" podID="f6add426-1a03-4ddf-9a93-70347e6f917f" containerID="067c0f2cffcd4bbf0799f0319cb6357128e19e779398ff2071cec944818a46bf" exitCode=0 Jan 21 10:50:46 crc kubenswrapper[4684]: I0121 10:50:46.260481 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kr7xf" event={"ID":"f6add426-1a03-4ddf-9a93-70347e6f917f","Type":"ContainerDied","Data":"067c0f2cffcd4bbf0799f0319cb6357128e19e779398ff2071cec944818a46bf"} Jan 21 10:50:47 crc kubenswrapper[4684]: I0121 10:50:47.268998 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kr7xf" event={"ID":"f6add426-1a03-4ddf-9a93-70347e6f917f","Type":"ContainerStarted","Data":"24564deb9915ebb54c2f9452ff4e4b993d613b55ef5a8e8fed2d96624c3bc710"} Jan 21 10:50:47 crc kubenswrapper[4684]: I0121 10:50:47.296607 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kr7xf" 
podStartSLOduration=2.8634537890000002 podStartE2EDuration="5.296586241s" podCreationTimestamp="2026-01-21 10:50:42 +0000 UTC" firstStartedPulling="2026-01-21 10:50:44.241047132 +0000 UTC m=+2681.999130119" lastFinishedPulling="2026-01-21 10:50:46.674179604 +0000 UTC m=+2684.432262571" observedRunningTime="2026-01-21 10:50:47.292473363 +0000 UTC m=+2685.050556330" watchObservedRunningTime="2026-01-21 10:50:47.296586241 +0000 UTC m=+2685.054669208" Jan 21 10:50:53 crc kubenswrapper[4684]: I0121 10:50:53.331270 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kr7xf" Jan 21 10:50:53 crc kubenswrapper[4684]: I0121 10:50:53.331942 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kr7xf" Jan 21 10:50:53 crc kubenswrapper[4684]: I0121 10:50:53.399178 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kr7xf" Jan 21 10:50:54 crc kubenswrapper[4684]: I0121 10:50:54.365832 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kr7xf" Jan 21 10:50:54 crc kubenswrapper[4684]: I0121 10:50:54.416223 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kr7xf"] Jan 21 10:50:56 crc kubenswrapper[4684]: I0121 10:50:56.350029 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kr7xf" podUID="f6add426-1a03-4ddf-9a93-70347e6f917f" containerName="registry-server" containerID="cri-o://24564deb9915ebb54c2f9452ff4e4b993d613b55ef5a8e8fed2d96624c3bc710" gracePeriod=2 Jan 21 10:50:56 crc kubenswrapper[4684]: I0121 10:50:56.778916 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kr7xf" Jan 21 10:50:56 crc kubenswrapper[4684]: I0121 10:50:56.969199 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txm9z\" (UniqueName: \"kubernetes.io/projected/f6add426-1a03-4ddf-9a93-70347e6f917f-kube-api-access-txm9z\") pod \"f6add426-1a03-4ddf-9a93-70347e6f917f\" (UID: \"f6add426-1a03-4ddf-9a93-70347e6f917f\") " Jan 21 10:50:56 crc kubenswrapper[4684]: I0121 10:50:56.969599 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6add426-1a03-4ddf-9a93-70347e6f917f-catalog-content\") pod \"f6add426-1a03-4ddf-9a93-70347e6f917f\" (UID: \"f6add426-1a03-4ddf-9a93-70347e6f917f\") " Jan 21 10:50:56 crc kubenswrapper[4684]: I0121 10:50:56.969689 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6add426-1a03-4ddf-9a93-70347e6f917f-utilities\") pod \"f6add426-1a03-4ddf-9a93-70347e6f917f\" (UID: \"f6add426-1a03-4ddf-9a93-70347e6f917f\") " Jan 21 10:50:56 crc kubenswrapper[4684]: I0121 10:50:56.970919 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6add426-1a03-4ddf-9a93-70347e6f917f-utilities" (OuterVolumeSpecName: "utilities") pod "f6add426-1a03-4ddf-9a93-70347e6f917f" (UID: "f6add426-1a03-4ddf-9a93-70347e6f917f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:50:56 crc kubenswrapper[4684]: I0121 10:50:56.977074 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6add426-1a03-4ddf-9a93-70347e6f917f-kube-api-access-txm9z" (OuterVolumeSpecName: "kube-api-access-txm9z") pod "f6add426-1a03-4ddf-9a93-70347e6f917f" (UID: "f6add426-1a03-4ddf-9a93-70347e6f917f"). InnerVolumeSpecName "kube-api-access-txm9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:50:57 crc kubenswrapper[4684]: I0121 10:50:57.022691 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6add426-1a03-4ddf-9a93-70347e6f917f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6add426-1a03-4ddf-9a93-70347e6f917f" (UID: "f6add426-1a03-4ddf-9a93-70347e6f917f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:50:57 crc kubenswrapper[4684]: I0121 10:50:57.071165 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txm9z\" (UniqueName: \"kubernetes.io/projected/f6add426-1a03-4ddf-9a93-70347e6f917f-kube-api-access-txm9z\") on node \"crc\" DevicePath \"\"" Jan 21 10:50:57 crc kubenswrapper[4684]: I0121 10:50:57.071215 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6add426-1a03-4ddf-9a93-70347e6f917f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:50:57 crc kubenswrapper[4684]: I0121 10:50:57.071229 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6add426-1a03-4ddf-9a93-70347e6f917f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:50:57 crc kubenswrapper[4684]: I0121 10:50:57.359598 4684 generic.go:334] "Generic (PLEG): container finished" podID="f6add426-1a03-4ddf-9a93-70347e6f917f" containerID="24564deb9915ebb54c2f9452ff4e4b993d613b55ef5a8e8fed2d96624c3bc710" exitCode=0 Jan 21 10:50:57 crc kubenswrapper[4684]: I0121 10:50:57.359639 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kr7xf" event={"ID":"f6add426-1a03-4ddf-9a93-70347e6f917f","Type":"ContainerDied","Data":"24564deb9915ebb54c2f9452ff4e4b993d613b55ef5a8e8fed2d96624c3bc710"} Jan 21 10:50:57 crc kubenswrapper[4684]: I0121 10:50:57.359667 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kr7xf" event={"ID":"f6add426-1a03-4ddf-9a93-70347e6f917f","Type":"ContainerDied","Data":"e110b399020674ca727a978feec9b1d4c8c9a2e530ba2c93135910ecd6256810"} Jan 21 10:50:57 crc kubenswrapper[4684]: I0121 10:50:57.359669 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kr7xf" Jan 21 10:50:57 crc kubenswrapper[4684]: I0121 10:50:57.359685 4684 scope.go:117] "RemoveContainer" containerID="24564deb9915ebb54c2f9452ff4e4b993d613b55ef5a8e8fed2d96624c3bc710" Jan 21 10:50:57 crc kubenswrapper[4684]: I0121 10:50:57.380857 4684 scope.go:117] "RemoveContainer" containerID="067c0f2cffcd4bbf0799f0319cb6357128e19e779398ff2071cec944818a46bf" Jan 21 10:50:57 crc kubenswrapper[4684]: I0121 10:50:57.393404 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kr7xf"] Jan 21 10:50:57 crc kubenswrapper[4684]: I0121 10:50:57.398400 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kr7xf"] Jan 21 10:50:57 crc kubenswrapper[4684]: I0121 10:50:57.421917 4684 scope.go:117] "RemoveContainer" containerID="91325496c6ce1a3faed31c03367febb22822db9443faafd2e2d3e8f7c4e6d7e8" Jan 21 10:50:57 crc kubenswrapper[4684]: I0121 10:50:57.437843 4684 scope.go:117] "RemoveContainer" containerID="24564deb9915ebb54c2f9452ff4e4b993d613b55ef5a8e8fed2d96624c3bc710" Jan 21 10:50:57 crc kubenswrapper[4684]: E0121 10:50:57.439098 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24564deb9915ebb54c2f9452ff4e4b993d613b55ef5a8e8fed2d96624c3bc710\": container with ID starting with 24564deb9915ebb54c2f9452ff4e4b993d613b55ef5a8e8fed2d96624c3bc710 not found: ID does not exist" containerID="24564deb9915ebb54c2f9452ff4e4b993d613b55ef5a8e8fed2d96624c3bc710" Jan 21 10:50:57 crc kubenswrapper[4684]: I0121 10:50:57.439147 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24564deb9915ebb54c2f9452ff4e4b993d613b55ef5a8e8fed2d96624c3bc710"} err="failed to get container status \"24564deb9915ebb54c2f9452ff4e4b993d613b55ef5a8e8fed2d96624c3bc710\": rpc error: code = NotFound desc = could not find container \"24564deb9915ebb54c2f9452ff4e4b993d613b55ef5a8e8fed2d96624c3bc710\": container with ID starting with 24564deb9915ebb54c2f9452ff4e4b993d613b55ef5a8e8fed2d96624c3bc710 not found: ID does not exist" Jan 21 10:50:57 crc kubenswrapper[4684]: I0121 10:50:57.439177 4684 scope.go:117] "RemoveContainer" containerID="067c0f2cffcd4bbf0799f0319cb6357128e19e779398ff2071cec944818a46bf" Jan 21 10:50:57 crc kubenswrapper[4684]: E0121 10:50:57.439568 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"067c0f2cffcd4bbf0799f0319cb6357128e19e779398ff2071cec944818a46bf\": container with ID starting with 067c0f2cffcd4bbf0799f0319cb6357128e19e779398ff2071cec944818a46bf not found: ID does not exist" containerID="067c0f2cffcd4bbf0799f0319cb6357128e19e779398ff2071cec944818a46bf" Jan 21 10:50:57 crc kubenswrapper[4684]: I0121 10:50:57.439617 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"067c0f2cffcd4bbf0799f0319cb6357128e19e779398ff2071cec944818a46bf"} err="failed to get container status \"067c0f2cffcd4bbf0799f0319cb6357128e19e779398ff2071cec944818a46bf\": rpc error: code = NotFound desc = could not find container \"067c0f2cffcd4bbf0799f0319cb6357128e19e779398ff2071cec944818a46bf\": container with ID starting with 067c0f2cffcd4bbf0799f0319cb6357128e19e779398ff2071cec944818a46bf not found: ID does not exist" Jan 21 10:50:57 crc kubenswrapper[4684]: I0121 10:50:57.439641 4684 scope.go:117] "RemoveContainer" 
containerID="91325496c6ce1a3faed31c03367febb22822db9443faafd2e2d3e8f7c4e6d7e8" Jan 21 10:50:57 crc kubenswrapper[4684]: E0121 10:50:57.439909 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91325496c6ce1a3faed31c03367febb22822db9443faafd2e2d3e8f7c4e6d7e8\": container with ID starting with 91325496c6ce1a3faed31c03367febb22822db9443faafd2e2d3e8f7c4e6d7e8 not found: ID does not exist" containerID="91325496c6ce1a3faed31c03367febb22822db9443faafd2e2d3e8f7c4e6d7e8" Jan 21 10:50:57 crc kubenswrapper[4684]: I0121 10:50:57.440475 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91325496c6ce1a3faed31c03367febb22822db9443faafd2e2d3e8f7c4e6d7e8"} err="failed to get container status \"91325496c6ce1a3faed31c03367febb22822db9443faafd2e2d3e8f7c4e6d7e8\": rpc error: code = NotFound desc = could not find container \"91325496c6ce1a3faed31c03367febb22822db9443faafd2e2d3e8f7c4e6d7e8\": container with ID starting with 91325496c6ce1a3faed31c03367febb22822db9443faafd2e2d3e8f7c4e6d7e8 not found: ID does not exist" Jan 21 10:50:58 crc kubenswrapper[4684]: I0121 10:50:58.525285 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6add426-1a03-4ddf-9a93-70347e6f917f" path="/var/lib/kubelet/pods/f6add426-1a03-4ddf-9a93-70347e6f917f/volumes" Jan 21 10:51:07 crc kubenswrapper[4684]: I0121 10:51:07.302308 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:51:07 crc kubenswrapper[4684]: I0121 10:51:07.303767 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:51:37 crc kubenswrapper[4684]: I0121 10:51:37.302188 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:51:37 crc kubenswrapper[4684]: I0121 10:51:37.302730 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:51:37 crc kubenswrapper[4684]: I0121 10:51:37.302868 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" Jan 21 10:51:37 crc kubenswrapper[4684]: I0121 10:51:37.303565 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"866c8599372c3bd89310f1e99b820fdd014225f6a396f33cc0166af41ed7a755"} pod="openshift-machine-config-operator/machine-config-daemon-sff6s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 10:51:37 crc 
kubenswrapper[4684]: I0121 10:51:37.303662 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" containerID="cri-o://866c8599372c3bd89310f1e99b820fdd014225f6a396f33cc0166af41ed7a755" gracePeriod=600 Jan 21 10:51:37 crc kubenswrapper[4684]: I0121 10:51:37.701090 4684 generic.go:334] "Generic (PLEG): container finished" podID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerID="866c8599372c3bd89310f1e99b820fdd014225f6a396f33cc0166af41ed7a755" exitCode=0 Jan 21 10:51:37 crc kubenswrapper[4684]: I0121 10:51:37.701226 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" event={"ID":"55d2c484-cf10-46b5-913f-2a033a2ff5c1","Type":"ContainerDied","Data":"866c8599372c3bd89310f1e99b820fdd014225f6a396f33cc0166af41ed7a755"} Jan 21 10:51:37 crc kubenswrapper[4684]: I0121 10:51:37.701425 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" event={"ID":"55d2c484-cf10-46b5-913f-2a033a2ff5c1","Type":"ContainerStarted","Data":"e671420303f7cbbe060431f54d1fda2dffb6433ae1876c4b45ffe1094abdc778"} Jan 21 10:51:37 crc kubenswrapper[4684]: I0121 10:51:37.701447 4684 scope.go:117] "RemoveContainer" containerID="062d85d4dd293a66c5ea87b72d5ef4e17f0f11089742f21c2445dde53232d20e" Jan 21 10:52:45 crc kubenswrapper[4684]: I0121 10:52:45.631984 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-v7t8z"] Jan 21 10:52:45 crc kubenswrapper[4684]: E0121 10:52:45.632879 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6add426-1a03-4ddf-9a93-70347e6f917f" containerName="extract-content" Jan 21 10:52:45 crc kubenswrapper[4684]: I0121 10:52:45.632896 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6add426-1a03-4ddf-9a93-70347e6f917f" containerName="extract-content" Jan 21 10:52:45 crc kubenswrapper[4684]: E0121 10:52:45.632913 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6add426-1a03-4ddf-9a93-70347e6f917f" containerName="extract-utilities" Jan 21 10:52:45 crc kubenswrapper[4684]: I0121 10:52:45.632921 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6add426-1a03-4ddf-9a93-70347e6f917f" containerName="extract-utilities" Jan 21 10:52:45 crc kubenswrapper[4684]: E0121 10:52:45.632947 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6add426-1a03-4ddf-9a93-70347e6f917f" containerName="registry-server" Jan 21 10:52:45 crc kubenswrapper[4684]: I0121 10:52:45.632957 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6add426-1a03-4ddf-9a93-70347e6f917f" containerName="registry-server" Jan 21 10:52:45 crc kubenswrapper[4684]: I0121 10:52:45.633132 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6add426-1a03-4ddf-9a93-70347e6f917f" containerName="registry-server" Jan 21 10:52:45 crc kubenswrapper[4684]: I0121 10:52:45.633677 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-v7t8z" Jan 21 10:52:45 crc kubenswrapper[4684]: I0121 10:52:45.637512 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-v7t8z"] Jan 21 10:52:45 crc kubenswrapper[4684]: I0121 10:52:45.720437 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx8vm\" (UniqueName: \"kubernetes.io/projected/bd609c16-f2a9-48bf-b7a0-4c57d9c406ed-kube-api-access-mx8vm\") pod \"infrawatch-operators-v7t8z\" (UID: \"bd609c16-f2a9-48bf-b7a0-4c57d9c406ed\") " pod="service-telemetry/infrawatch-operators-v7t8z" Jan 21 10:52:45 crc kubenswrapper[4684]: I0121 10:52:45.822517 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx8vm\" (UniqueName: \"kubernetes.io/projected/bd609c16-f2a9-48bf-b7a0-4c57d9c406ed-kube-api-access-mx8vm\") pod \"infrawatch-operators-v7t8z\" (UID: \"bd609c16-f2a9-48bf-b7a0-4c57d9c406ed\") " pod="service-telemetry/infrawatch-operators-v7t8z" Jan 21 10:52:45 crc kubenswrapper[4684]: I0121 10:52:45.850452 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx8vm\" (UniqueName: \"kubernetes.io/projected/bd609c16-f2a9-48bf-b7a0-4c57d9c406ed-kube-api-access-mx8vm\") pod \"infrawatch-operators-v7t8z\" (UID: \"bd609c16-f2a9-48bf-b7a0-4c57d9c406ed\") " pod="service-telemetry/infrawatch-operators-v7t8z" Jan 21 10:52:45 crc kubenswrapper[4684]: I0121 10:52:45.960021 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-v7t8z" Jan 21 10:52:46 crc kubenswrapper[4684]: I0121 10:52:46.426255 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-v7t8z"] Jan 21 10:52:46 crc kubenswrapper[4684]: I0121 10:52:46.428795 4684 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 10:52:47 crc kubenswrapper[4684]: I0121 10:52:47.212011 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-v7t8z" event={"ID":"bd609c16-f2a9-48bf-b7a0-4c57d9c406ed","Type":"ContainerStarted","Data":"3d7d2445d5b77a67a6c29c637ba93268550a366a4c266c0592061d4fc7433d82"} Jan 21 10:52:48 crc kubenswrapper[4684]: I0121 10:52:48.221566 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-v7t8z" event={"ID":"bd609c16-f2a9-48bf-b7a0-4c57d9c406ed","Type":"ContainerStarted","Data":"7b82717479e9bd5ad47bd7e3b3cbbf69c104415de4d81ddf3d7aafed8203a878"} Jan 21 10:52:48 crc kubenswrapper[4684]: I0121 10:52:48.240682 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-v7t8z" podStartSLOduration=2.175276904 podStartE2EDuration="3.240663235s" podCreationTimestamp="2026-01-21 10:52:45 +0000 UTC" firstStartedPulling="2026-01-21 10:52:46.428563246 +0000 UTC m=+2804.186646213" lastFinishedPulling="2026-01-21 10:52:47.493949577 +0000 UTC m=+2805.252032544" observedRunningTime="2026-01-21 10:52:48.234972327 +0000 UTC m=+2805.993055294" watchObservedRunningTime="2026-01-21 10:52:48.240663235 +0000 UTC m=+2805.998746202" Jan 21 10:52:55 crc kubenswrapper[4684]: I0121 10:52:55.960645 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-v7t8z" Jan 21 10:52:55 crc kubenswrapper[4684]: I0121 10:52:55.961307 4684 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-v7t8z" Jan 21 10:52:55 crc kubenswrapper[4684]: I0121 10:52:55.989667 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-v7t8z" Jan 21 10:52:56 crc kubenswrapper[4684]: I0121 10:52:56.300514 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-v7t8z" Jan 21 10:52:56 crc kubenswrapper[4684]: I0121 10:52:56.394699 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-v7t8z"] Jan 21 10:52:58 crc kubenswrapper[4684]: I0121 10:52:58.289875 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-v7t8z" podUID="bd609c16-f2a9-48bf-b7a0-4c57d9c406ed" containerName="registry-server" containerID="cri-o://7b82717479e9bd5ad47bd7e3b3cbbf69c104415de4d81ddf3d7aafed8203a878" gracePeriod=2 Jan 21 10:52:59 crc kubenswrapper[4684]: I0121 10:52:59.205920 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-v7t8z" Jan 21 10:52:59 crc kubenswrapper[4684]: I0121 10:52:59.298925 4684 generic.go:334] "Generic (PLEG): container finished" podID="bd609c16-f2a9-48bf-b7a0-4c57d9c406ed" containerID="7b82717479e9bd5ad47bd7e3b3cbbf69c104415de4d81ddf3d7aafed8203a878" exitCode=0 Jan 21 10:52:59 crc kubenswrapper[4684]: I0121 10:52:59.299022 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-v7t8z" Jan 21 10:52:59 crc kubenswrapper[4684]: I0121 10:52:59.299051 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-v7t8z" event={"ID":"bd609c16-f2a9-48bf-b7a0-4c57d9c406ed","Type":"ContainerDied","Data":"7b82717479e9bd5ad47bd7e3b3cbbf69c104415de4d81ddf3d7aafed8203a878"} Jan 21 10:52:59 crc kubenswrapper[4684]: I0121 10:52:59.299139 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-v7t8z" event={"ID":"bd609c16-f2a9-48bf-b7a0-4c57d9c406ed","Type":"ContainerDied","Data":"3d7d2445d5b77a67a6c29c637ba93268550a366a4c266c0592061d4fc7433d82"} Jan 21 10:52:59 crc kubenswrapper[4684]: I0121 10:52:59.299164 4684 scope.go:117] "RemoveContainer" containerID="7b82717479e9bd5ad47bd7e3b3cbbf69c104415de4d81ddf3d7aafed8203a878" Jan 21 10:52:59 crc kubenswrapper[4684]: I0121 10:52:59.317756 4684 scope.go:117] "RemoveContainer" containerID="7b82717479e9bd5ad47bd7e3b3cbbf69c104415de4d81ddf3d7aafed8203a878" Jan 21 10:52:59 crc kubenswrapper[4684]: E0121 10:52:59.318349 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b82717479e9bd5ad47bd7e3b3cbbf69c104415de4d81ddf3d7aafed8203a878\": container with ID starting with 7b82717479e9bd5ad47bd7e3b3cbbf69c104415de4d81ddf3d7aafed8203a878 not found: ID does not exist" containerID="7b82717479e9bd5ad47bd7e3b3cbbf69c104415de4d81ddf3d7aafed8203a878" Jan 21 10:52:59 crc kubenswrapper[4684]: I0121 10:52:59.318445 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b82717479e9bd5ad47bd7e3b3cbbf69c104415de4d81ddf3d7aafed8203a878"} err="failed to get container status \"7b82717479e9bd5ad47bd7e3b3cbbf69c104415de4d81ddf3d7aafed8203a878\": rpc error: code = NotFound desc = could not find container 
\"7b82717479e9bd5ad47bd7e3b3cbbf69c104415de4d81ddf3d7aafed8203a878\": container with ID starting with 7b82717479e9bd5ad47bd7e3b3cbbf69c104415de4d81ddf3d7aafed8203a878 not found: ID does not exist" Jan 21 10:52:59 crc kubenswrapper[4684]: I0121 10:52:59.405547 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mx8vm\" (UniqueName: \"kubernetes.io/projected/bd609c16-f2a9-48bf-b7a0-4c57d9c406ed-kube-api-access-mx8vm\") pod \"bd609c16-f2a9-48bf-b7a0-4c57d9c406ed\" (UID: \"bd609c16-f2a9-48bf-b7a0-4c57d9c406ed\") " Jan 21 10:52:59 crc kubenswrapper[4684]: I0121 10:52:59.412386 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd609c16-f2a9-48bf-b7a0-4c57d9c406ed-kube-api-access-mx8vm" (OuterVolumeSpecName: "kube-api-access-mx8vm") pod "bd609c16-f2a9-48bf-b7a0-4c57d9c406ed" (UID: "bd609c16-f2a9-48bf-b7a0-4c57d9c406ed"). InnerVolumeSpecName "kube-api-access-mx8vm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:52:59 crc kubenswrapper[4684]: I0121 10:52:59.507039 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mx8vm\" (UniqueName: \"kubernetes.io/projected/bd609c16-f2a9-48bf-b7a0-4c57d9c406ed-kube-api-access-mx8vm\") on node \"crc\" DevicePath \"\"" Jan 21 10:52:59 crc kubenswrapper[4684]: I0121 10:52:59.630405 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-v7t8z"] Jan 21 10:52:59 crc kubenswrapper[4684]: I0121 10:52:59.636240 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-v7t8z"] Jan 21 10:53:00 crc kubenswrapper[4684]: I0121 10:53:00.526962 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd609c16-f2a9-48bf-b7a0-4c57d9c406ed" path="/var/lib/kubelet/pods/bd609c16-f2a9-48bf-b7a0-4c57d9c406ed/volumes" Jan 21 10:53:37 crc kubenswrapper[4684]: I0121 10:53:37.303154 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:53:37 crc kubenswrapper[4684]: I0121 10:53:37.303696 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:53:42 crc kubenswrapper[4684]: I0121 10:53:42.005793 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nmmqd"] Jan 21 10:53:42 crc kubenswrapper[4684]: E0121 10:53:42.006607 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd609c16-f2a9-48bf-b7a0-4c57d9c406ed" containerName="registry-server" Jan 21 10:53:42 crc kubenswrapper[4684]: I0121 10:53:42.006621 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd609c16-f2a9-48bf-b7a0-4c57d9c406ed" containerName="registry-server" Jan 21 10:53:42 crc kubenswrapper[4684]: I0121 10:53:42.006753 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd609c16-f2a9-48bf-b7a0-4c57d9c406ed" containerName="registry-server" Jan 21 10:53:42 crc kubenswrapper[4684]: I0121 10:53:42.007633 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nmmqd" Jan 21 10:53:42 crc kubenswrapper[4684]: I0121 10:53:42.031633 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nmmqd"] Jan 21 10:53:42 crc kubenswrapper[4684]: I0121 10:53:42.142959 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1fa27c1-e898-44f6-bf7e-c6951933ad2e-utilities\") pod \"community-operators-nmmqd\" (UID: \"e1fa27c1-e898-44f6-bf7e-c6951933ad2e\") " pod="openshift-marketplace/community-operators-nmmqd" Jan 21 10:53:42 crc kubenswrapper[4684]: I0121 10:53:42.143019 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gfh9\" (UniqueName: \"kubernetes.io/projected/e1fa27c1-e898-44f6-bf7e-c6951933ad2e-kube-api-access-6gfh9\") pod \"community-operators-nmmqd\" (UID: \"e1fa27c1-e898-44f6-bf7e-c6951933ad2e\") " pod="openshift-marketplace/community-operators-nmmqd" Jan 21 10:53:42 crc kubenswrapper[4684]: I0121 10:53:42.143054 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1fa27c1-e898-44f6-bf7e-c6951933ad2e-catalog-content\") pod \"community-operators-nmmqd\" (UID: \"e1fa27c1-e898-44f6-bf7e-c6951933ad2e\") " pod="openshift-marketplace/community-operators-nmmqd" Jan 21 10:53:42 crc kubenswrapper[4684]: I0121 10:53:42.244467 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1fa27c1-e898-44f6-bf7e-c6951933ad2e-utilities\") pod \"community-operators-nmmqd\" (UID: \"e1fa27c1-e898-44f6-bf7e-c6951933ad2e\") " pod="openshift-marketplace/community-operators-nmmqd" Jan 21 10:53:42 crc kubenswrapper[4684]: I0121 10:53:42.244805 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gfh9\" (UniqueName: \"kubernetes.io/projected/e1fa27c1-e898-44f6-bf7e-c6951933ad2e-kube-api-access-6gfh9\") pod \"community-operators-nmmqd\" (UID: \"e1fa27c1-e898-44f6-bf7e-c6951933ad2e\") " pod="openshift-marketplace/community-operators-nmmqd" Jan 21 10:53:42 crc kubenswrapper[4684]: I0121 10:53:42.244926 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1fa27c1-e898-44f6-bf7e-c6951933ad2e-catalog-content\") pod \"community-operators-nmmqd\" (UID: \"e1fa27c1-e898-44f6-bf7e-c6951933ad2e\") " pod="openshift-marketplace/community-operators-nmmqd" Jan 21 10:53:42 crc kubenswrapper[4684]: I0121 10:53:42.244968 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1fa27c1-e898-44f6-bf7e-c6951933ad2e-utilities\") pod \"community-operators-nmmqd\" (UID: \"e1fa27c1-e898-44f6-bf7e-c6951933ad2e\") " pod="openshift-marketplace/community-operators-nmmqd" Jan 21 10:53:42 crc kubenswrapper[4684]: I0121 10:53:42.245251 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1fa27c1-e898-44f6-bf7e-c6951933ad2e-catalog-content\") pod \"community-operators-nmmqd\" (UID: \"e1fa27c1-e898-44f6-bf7e-c6951933ad2e\") " pod="openshift-marketplace/community-operators-nmmqd" Jan 21 10:53:42 crc kubenswrapper[4684]: I0121 10:53:42.267886 4684 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6gfh9\" (UniqueName: \"kubernetes.io/projected/e1fa27c1-e898-44f6-bf7e-c6951933ad2e-kube-api-access-6gfh9\") pod \"community-operators-nmmqd\" (UID: \"e1fa27c1-e898-44f6-bf7e-c6951933ad2e\") " pod="openshift-marketplace/community-operators-nmmqd" Jan 21 10:53:42 crc kubenswrapper[4684]: I0121 10:53:42.333611 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nmmqd" Jan 21 10:53:45 crc kubenswrapper[4684]: I0121 10:53:42.647373 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nmmqd"] Jan 21 10:53:45 crc kubenswrapper[4684]: I0121 10:53:43.610989 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmmqd" event={"ID":"e1fa27c1-e898-44f6-bf7e-c6951933ad2e","Type":"ContainerStarted","Data":"d87a4c44a8a042aa048531eb0b392a301e87ac4a68ba822a61539ac5babe3a14"} Jan 21 10:53:45 crc kubenswrapper[4684]: I0121 10:53:43.611051 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmmqd" event={"ID":"e1fa27c1-e898-44f6-bf7e-c6951933ad2e","Type":"ContainerStarted","Data":"cb15d03583c874536cbe97883daddf8964aec2f543ab417a0d1562ad2e6bb790"} Jan 21 10:53:45 crc kubenswrapper[4684]: I0121 10:53:44.620147 4684 generic.go:334] "Generic (PLEG): container finished" podID="e1fa27c1-e898-44f6-bf7e-c6951933ad2e" containerID="d87a4c44a8a042aa048531eb0b392a301e87ac4a68ba822a61539ac5babe3a14" exitCode=0 Jan 21 10:53:45 crc kubenswrapper[4684]: I0121 10:53:44.620232 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmmqd" event={"ID":"e1fa27c1-e898-44f6-bf7e-c6951933ad2e","Type":"ContainerDied","Data":"d87a4c44a8a042aa048531eb0b392a301e87ac4a68ba822a61539ac5babe3a14"} Jan 21 10:53:50 crc kubenswrapper[4684]: I0121 10:53:50.662641 4684 generic.go:334] "Generic (PLEG): container finished" podID="e1fa27c1-e898-44f6-bf7e-c6951933ad2e" containerID="e86812845f7de6de1082c8c1f6256ede17c4f931578c33d0690ca3c1f9631092" exitCode=0 Jan 21 10:53:50 crc kubenswrapper[4684]: I0121 10:53:50.662782 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmmqd" event={"ID":"e1fa27c1-e898-44f6-bf7e-c6951933ad2e","Type":"ContainerDied","Data":"e86812845f7de6de1082c8c1f6256ede17c4f931578c33d0690ca3c1f9631092"} Jan 21 10:53:52 crc kubenswrapper[4684]: I0121 10:53:52.680100 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmmqd" event={"ID":"e1fa27c1-e898-44f6-bf7e-c6951933ad2e","Type":"ContainerStarted","Data":"0f1bfbba81862e1816dbcacf704218001d26e4f638c22d086d60cd0b37e49003"} Jan 21 10:53:53 crc kubenswrapper[4684]: I0121 10:53:53.705061 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nmmqd" podStartSLOduration=5.085548841 podStartE2EDuration="12.705028483s" podCreationTimestamp="2026-01-21 10:53:41 +0000 UTC" firstStartedPulling="2026-01-21 10:53:44.621959396 +0000 UTC m=+2862.380042363" lastFinishedPulling="2026-01-21 10:53:52.241439038 +0000 UTC m=+2869.999522005" observedRunningTime="2026-01-21 10:53:53.703772604 +0000 UTC m=+2871.461855571" watchObservedRunningTime="2026-01-21 10:53:53.705028483 +0000 UTC m=+2871.463111450" Jan 21 10:54:02 crc kubenswrapper[4684]: I0121 10:54:02.334181 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-nmmqd" Jan 21 10:54:02 crc kubenswrapper[4684]: I0121 10:54:02.334756 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nmmqd" Jan 21 10:54:02 crc kubenswrapper[4684]: I0121 10:54:02.379002 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nmmqd" Jan 21 10:54:02 crc kubenswrapper[4684]: I0121 10:54:02.788409 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nmmqd" Jan 21 10:54:02 crc kubenswrapper[4684]: I0121 10:54:02.827676 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nmmqd"] Jan 21 10:54:04 crc kubenswrapper[4684]: I0121 10:54:04.758004 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nmmqd" podUID="e1fa27c1-e898-44f6-bf7e-c6951933ad2e" containerName="registry-server" containerID="cri-o://0f1bfbba81862e1816dbcacf704218001d26e4f638c22d086d60cd0b37e49003" gracePeriod=2 Jan 21 10:54:06 crc kubenswrapper[4684]: I0121 10:54:06.774081 4684 generic.go:334] "Generic (PLEG): container finished" podID="e1fa27c1-e898-44f6-bf7e-c6951933ad2e" containerID="0f1bfbba81862e1816dbcacf704218001d26e4f638c22d086d60cd0b37e49003" exitCode=0 Jan 21 10:54:06 crc kubenswrapper[4684]: I0121 10:54:06.774158 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmmqd" event={"ID":"e1fa27c1-e898-44f6-bf7e-c6951933ad2e","Type":"ContainerDied","Data":"0f1bfbba81862e1816dbcacf704218001d26e4f638c22d086d60cd0b37e49003"} Jan 21 10:54:06 crc kubenswrapper[4684]: I0121 10:54:06.987134 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nmmqd" Jan 21 10:54:07 crc kubenswrapper[4684]: I0121 10:54:07.144470 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gfh9\" (UniqueName: \"kubernetes.io/projected/e1fa27c1-e898-44f6-bf7e-c6951933ad2e-kube-api-access-6gfh9\") pod \"e1fa27c1-e898-44f6-bf7e-c6951933ad2e\" (UID: \"e1fa27c1-e898-44f6-bf7e-c6951933ad2e\") " Jan 21 10:54:07 crc kubenswrapper[4684]: I0121 10:54:07.144836 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1fa27c1-e898-44f6-bf7e-c6951933ad2e-utilities\") pod \"e1fa27c1-e898-44f6-bf7e-c6951933ad2e\" (UID: \"e1fa27c1-e898-44f6-bf7e-c6951933ad2e\") " Jan 21 10:54:07 crc kubenswrapper[4684]: I0121 10:54:07.144917 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1fa27c1-e898-44f6-bf7e-c6951933ad2e-catalog-content\") pod \"e1fa27c1-e898-44f6-bf7e-c6951933ad2e\" (UID: \"e1fa27c1-e898-44f6-bf7e-c6951933ad2e\") " Jan 21 10:54:07 crc kubenswrapper[4684]: I0121 10:54:07.145639 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1fa27c1-e898-44f6-bf7e-c6951933ad2e-utilities" (OuterVolumeSpecName: "utilities") pod "e1fa27c1-e898-44f6-bf7e-c6951933ad2e" (UID: "e1fa27c1-e898-44f6-bf7e-c6951933ad2e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:54:07 crc kubenswrapper[4684]: I0121 10:54:07.149480 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1fa27c1-e898-44f6-bf7e-c6951933ad2e-kube-api-access-6gfh9" (OuterVolumeSpecName: "kube-api-access-6gfh9") pod "e1fa27c1-e898-44f6-bf7e-c6951933ad2e" (UID: "e1fa27c1-e898-44f6-bf7e-c6951933ad2e"). InnerVolumeSpecName "kube-api-access-6gfh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:54:07 crc kubenswrapper[4684]: I0121 10:54:07.207014 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1fa27c1-e898-44f6-bf7e-c6951933ad2e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1fa27c1-e898-44f6-bf7e-c6951933ad2e" (UID: "e1fa27c1-e898-44f6-bf7e-c6951933ad2e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:54:07 crc kubenswrapper[4684]: I0121 10:54:07.246617 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1fa27c1-e898-44f6-bf7e-c6951933ad2e-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:54:07 crc kubenswrapper[4684]: I0121 10:54:07.246667 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1fa27c1-e898-44f6-bf7e-c6951933ad2e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:54:07 crc kubenswrapper[4684]: I0121 10:54:07.246688 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gfh9\" (UniqueName: \"kubernetes.io/projected/e1fa27c1-e898-44f6-bf7e-c6951933ad2e-kube-api-access-6gfh9\") on node \"crc\" DevicePath \"\"" Jan 21 10:54:07 crc kubenswrapper[4684]: I0121 10:54:07.303124 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:54:07 crc kubenswrapper[4684]: I0121 10:54:07.303235 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:54:07 crc kubenswrapper[4684]: I0121 10:54:07.783684 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmmqd" event={"ID":"e1fa27c1-e898-44f6-bf7e-c6951933ad2e","Type":"ContainerDied","Data":"cb15d03583c874536cbe97883daddf8964aec2f543ab417a0d1562ad2e6bb790"} Jan 21 10:54:07 crc kubenswrapper[4684]: I0121 10:54:07.783761 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nmmqd" Jan 21 10:54:07 crc kubenswrapper[4684]: I0121 10:54:07.783763 4684 scope.go:117] "RemoveContainer" containerID="0f1bfbba81862e1816dbcacf704218001d26e4f638c22d086d60cd0b37e49003" Jan 21 10:54:07 crc kubenswrapper[4684]: I0121 10:54:07.801143 4684 scope.go:117] "RemoveContainer" containerID="e86812845f7de6de1082c8c1f6256ede17c4f931578c33d0690ca3c1f9631092" Jan 21 10:54:07 crc kubenswrapper[4684]: I0121 10:54:07.822431 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nmmqd"] Jan 21 10:54:07 crc kubenswrapper[4684]: I0121 10:54:07.830973 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nmmqd"] Jan 21 10:54:07 crc kubenswrapper[4684]: I0121 10:54:07.832759 4684 scope.go:117] "RemoveContainer" containerID="d87a4c44a8a042aa048531eb0b392a301e87ac4a68ba822a61539ac5babe3a14" Jan 21 10:54:08 crc kubenswrapper[4684]: I0121 10:54:08.523514 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1fa27c1-e898-44f6-bf7e-c6951933ad2e" path="/var/lib/kubelet/pods/e1fa27c1-e898-44f6-bf7e-c6951933ad2e/volumes" Jan 21 10:54:37 crc kubenswrapper[4684]: I0121 10:54:37.305992 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:54:37 crc kubenswrapper[4684]: I0121 10:54:37.308867 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:54:37 crc kubenswrapper[4684]: I0121 10:54:37.309005 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" Jan 21 10:54:37 crc kubenswrapper[4684]: I0121 10:54:37.309844 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e671420303f7cbbe060431f54d1fda2dffb6433ae1876c4b45ffe1094abdc778"} pod="openshift-machine-config-operator/machine-config-daemon-sff6s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 10:54:37 crc kubenswrapper[4684]: I0121 10:54:37.309979 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" containerID="cri-o://e671420303f7cbbe060431f54d1fda2dffb6433ae1876c4b45ffe1094abdc778" gracePeriod=600 Jan 21 10:54:37 crc kubenswrapper[4684]: E0121 10:54:37.769196 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:54:38 crc kubenswrapper[4684]: I0121 10:54:38.058147 4684 
Jan 21 10:54:38 crc kubenswrapper[4684]: I0121 10:54:38.058147 4684 generic.go:334] "Generic (PLEG): container finished" podID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerID="e671420303f7cbbe060431f54d1fda2dffb6433ae1876c4b45ffe1094abdc778" exitCode=0
Jan 21 10:54:38 crc kubenswrapper[4684]: I0121 10:54:38.058553 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" event={"ID":"55d2c484-cf10-46b5-913f-2a033a2ff5c1","Type":"ContainerDied","Data":"e671420303f7cbbe060431f54d1fda2dffb6433ae1876c4b45ffe1094abdc778"}
Jan 21 10:54:38 crc kubenswrapper[4684]: I0121 10:54:38.058663 4684 scope.go:117] "RemoveContainer" containerID="866c8599372c3bd89310f1e99b820fdd014225f6a396f33cc0166af41ed7a755"
Jan 21 10:54:38 crc kubenswrapper[4684]: I0121 10:54:38.059301 4684 scope.go:117] "RemoveContainer" containerID="e671420303f7cbbe060431f54d1fda2dffb6433ae1876c4b45ffe1094abdc778"
Jan 21 10:54:38 crc kubenswrapper[4684]: E0121 10:54:38.059794 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1"
Jan 21 10:54:48 crc kubenswrapper[4684]: I0121 10:54:48.516559 4684 scope.go:117] "RemoveContainer" containerID="e671420303f7cbbe060431f54d1fda2dffb6433ae1876c4b45ffe1094abdc778"
Jan 21 10:54:48 crc kubenswrapper[4684]: E0121 10:54:48.517963 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1"
Jan 21 10:54:59 crc kubenswrapper[4684]: I0121 10:54:59.514450 4684 scope.go:117] "RemoveContainer" containerID="e671420303f7cbbe060431f54d1fda2dffb6433ae1876c4b45ffe1094abdc778"
Jan 21 10:54:59 crc kubenswrapper[4684]: E0121 10:54:59.515133 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1"
Jan 21 10:55:11 crc kubenswrapper[4684]: I0121 10:55:11.515192 4684 scope.go:117] "RemoveContainer" containerID="e671420303f7cbbe060431f54d1fda2dffb6433ae1876c4b45ffe1094abdc778"
Jan 21 10:55:11 crc kubenswrapper[4684]: E0121 10:55:11.516098 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1"
Jan 21 10:55:23 crc kubenswrapper[4684]: I0121 10:55:23.514304 4684 scope.go:117] "RemoveContainer" containerID="e671420303f7cbbe060431f54d1fda2dffb6433ae1876c4b45ffe1094abdc778"
Jan 21 10:55:23 crc kubenswrapper[4684]: E0121 10:55:23.516106 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1"
Jan 21 10:55:36 crc kubenswrapper[4684]: I0121 10:55:36.516731 4684 scope.go:117] "RemoveContainer" containerID="e671420303f7cbbe060431f54d1fda2dffb6433ae1876c4b45ffe1094abdc778"
Jan 21 10:55:36 crc kubenswrapper[4684]: E0121 10:55:36.521640 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1"
Jan 21 10:55:51 crc kubenswrapper[4684]: I0121 10:55:51.514394 4684 scope.go:117] "RemoveContainer" containerID="e671420303f7cbbe060431f54d1fda2dffb6433ae1876c4b45ffe1094abdc778"
Jan 21 10:55:51 crc kubenswrapper[4684]: E0121 10:55:51.515398 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1"
Jan 21 10:56:06 crc kubenswrapper[4684]: I0121 10:56:06.515159 4684 scope.go:117] "RemoveContainer" containerID="e671420303f7cbbe060431f54d1fda2dffb6433ae1876c4b45ffe1094abdc778"
Jan 21 10:56:06 crc kubenswrapper[4684]: E0121 10:56:06.516161 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1"
Jan 21 10:56:19 crc kubenswrapper[4684]: I0121 10:56:19.514917 4684 scope.go:117] "RemoveContainer" containerID="e671420303f7cbbe060431f54d1fda2dffb6433ae1876c4b45ffe1094abdc778"
Jan 21 10:56:19 crc kubenswrapper[4684]: E0121 10:56:19.515770 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1"
Jan 21 10:56:32 crc kubenswrapper[4684]: I0121 10:56:32.519550 4684 scope.go:117] "RemoveContainer" containerID="e671420303f7cbbe060431f54d1fda2dffb6433ae1876c4b45ffe1094abdc778"
Jan 21 10:56:32 crc kubenswrapper[4684]: E0121 10:56:32.520438 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1"
Jan 21 10:56:47 crc kubenswrapper[4684]: I0121 10:56:47.514925 4684 scope.go:117] "RemoveContainer" containerID="e671420303f7cbbe060431f54d1fda2dffb6433ae1876c4b45ffe1094abdc778"
Jan 21 10:56:47 crc kubenswrapper[4684]: E0121 10:56:47.515914 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1"
Jan 21 10:56:58 crc kubenswrapper[4684]: I0121 10:56:58.515112 4684 scope.go:117] "RemoveContainer" containerID="e671420303f7cbbe060431f54d1fda2dffb6433ae1876c4b45ffe1094abdc778"
Jan 21 10:56:58 crc kubenswrapper[4684]: E0121 10:56:58.515880 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1"
Jan 21 10:57:13 crc kubenswrapper[4684]: I0121 10:57:13.515145 4684 scope.go:117] "RemoveContainer" containerID="e671420303f7cbbe060431f54d1fda2dffb6433ae1876c4b45ffe1094abdc778"
Jan 21 10:57:13 crc kubenswrapper[4684]: E0121 10:57:13.515992 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1"
Jan 21 10:57:26 crc kubenswrapper[4684]: I0121 10:57:26.518561 4684 scope.go:117] "RemoveContainer" containerID="e671420303f7cbbe060431f54d1fda2dffb6433ae1876c4b45ffe1094abdc778"
Jan 21 10:57:26 crc kubenswrapper[4684]: E0121 10:57:26.519192 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1"
Jan 21 10:57:38 crc kubenswrapper[4684]: I0121 10:57:38.515244 4684 scope.go:117] "RemoveContainer" containerID="e671420303f7cbbe060431f54d1fda2dffb6433ae1876c4b45ffe1094abdc778"
Jan 21 10:57:38 crc kubenswrapper[4684]: E0121 10:57:38.517277 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1"
Jan 21 10:57:50 crc kubenswrapper[4684]: I0121 10:57:50.514680 4684 scope.go:117] "RemoveContainer" containerID="e671420303f7cbbe060431f54d1fda2dffb6433ae1876c4b45ffe1094abdc778"
Jan 21 10:57:50 crc kubenswrapper[4684]: E0121 10:57:50.515715 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1"
Jan 21 10:58:05 crc kubenswrapper[4684]: I0121 10:58:05.514356 4684 scope.go:117] "RemoveContainer" containerID="e671420303f7cbbe060431f54d1fda2dffb6433ae1876c4b45ffe1094abdc778"
Jan 21 10:58:05 crc kubenswrapper[4684]: E0121 10:58:05.515459 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1"
Jan 21 10:58:16 crc kubenswrapper[4684]: I0121 10:58:16.518105 4684 scope.go:117] "RemoveContainer" containerID="e671420303f7cbbe060431f54d1fda2dffb6433ae1876c4b45ffe1094abdc778"
Jan 21 10:58:16 crc kubenswrapper[4684]: E0121 10:58:16.518893 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1"
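The RemoveContainer/CrashLoopBackOff pairs repeating every 10 to 15 seconds above are not the back-off itself; each pair is a sync-loop attempt rejected because the container's restart back-off has already hit its cap. The kubelet's restart back-off doubles from a 10s base up to a 5m ceiling, which is the "back-off 5m0s" in every error, and the daemon is in fact restarted at 10:59:45, roughly five minutes after the 10:54:37 kill. A sketch of that capped doubling (the constants match the kubelet defaults; the function itself is illustrative, not kubelet code):

```go
package main

import (
	"fmt"
	"time"
)

// nextBackoff mirrors the kubelet's container-restart back-off policy:
// start at 10s, double on each failed restart, cap at 5m ("back-off 5m0s").
func nextBackoff(cur time.Duration) time.Duration {
	const (
		base     = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	if cur == 0 {
		return base
	}
	if cur >= maxDelay/2 {
		return maxDelay
	}
	return 2 * cur
}

func main() {
	d := time.Duration(0)
	for i := 0; i < 7; i++ {
		d = nextBackoff(d)
		fmt.Println(d) // 10s 20s 40s 1m20s 2m40s 5m0s 5m0s
	}
}
```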
podUID="e1fa27c1-e898-44f6-bf7e-c6951933ad2e" containerName="extract-utilities" Jan 21 10:58:29 crc kubenswrapper[4684]: I0121 10:58:29.091580 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1fa27c1-e898-44f6-bf7e-c6951933ad2e" containerName="registry-server" Jan 21 10:58:29 crc kubenswrapper[4684]: I0121 10:58:29.092130 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-9w8bz" Jan 21 10:58:29 crc kubenswrapper[4684]: I0121 10:58:29.103087 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-9w8bz"] Jan 21 10:58:29 crc kubenswrapper[4684]: I0121 10:58:29.205079 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgzws\" (UniqueName: \"kubernetes.io/projected/be26266e-e179-4e5e-a57b-32846d79f042-kube-api-access-hgzws\") pod \"infrawatch-operators-9w8bz\" (UID: \"be26266e-e179-4e5e-a57b-32846d79f042\") " pod="service-telemetry/infrawatch-operators-9w8bz" Jan 21 10:58:29 crc kubenswrapper[4684]: I0121 10:58:29.307020 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgzws\" (UniqueName: \"kubernetes.io/projected/be26266e-e179-4e5e-a57b-32846d79f042-kube-api-access-hgzws\") pod \"infrawatch-operators-9w8bz\" (UID: \"be26266e-e179-4e5e-a57b-32846d79f042\") " pod="service-telemetry/infrawatch-operators-9w8bz" Jan 21 10:58:29 crc kubenswrapper[4684]: I0121 10:58:29.325553 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgzws\" (UniqueName: \"kubernetes.io/projected/be26266e-e179-4e5e-a57b-32846d79f042-kube-api-access-hgzws\") pod \"infrawatch-operators-9w8bz\" (UID: \"be26266e-e179-4e5e-a57b-32846d79f042\") " pod="service-telemetry/infrawatch-operators-9w8bz" Jan 21 10:58:29 crc kubenswrapper[4684]: I0121 10:58:29.415661 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-9w8bz" Jan 21 10:58:29 crc kubenswrapper[4684]: I0121 10:58:29.647928 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-9w8bz"] Jan 21 10:58:29 crc kubenswrapper[4684]: I0121 10:58:29.679120 4684 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 10:58:29 crc kubenswrapper[4684]: I0121 10:58:29.746290 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-9w8bz" event={"ID":"be26266e-e179-4e5e-a57b-32846d79f042","Type":"ContainerStarted","Data":"06ec28b3eb24bf06dcfedc1ebddb2d6aa42844095c643aee49a81010f20061e6"} Jan 21 10:58:30 crc kubenswrapper[4684]: I0121 10:58:30.514842 4684 scope.go:117] "RemoveContainer" containerID="e671420303f7cbbe060431f54d1fda2dffb6433ae1876c4b45ffe1094abdc778" Jan 21 10:58:30 crc kubenswrapper[4684]: E0121 10:58:30.515131 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:58:30 crc kubenswrapper[4684]: I0121 10:58:30.755853 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-9w8bz" event={"ID":"be26266e-e179-4e5e-a57b-32846d79f042","Type":"ContainerStarted","Data":"e0859a65d3ed21d9539785c7b1760d166411d36024efd70fd641addeaa89c5f0"} Jan 21 10:58:31 crc kubenswrapper[4684]: I0121 10:58:31.787878 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-9w8bz" podStartSLOduration=1.8880956150000001 podStartE2EDuration="2.787846003s" podCreationTimestamp="2026-01-21 10:58:29 +0000 UTC" firstStartedPulling="2026-01-21 10:58:29.678785461 +0000 UTC m=+3147.436868428" lastFinishedPulling="2026-01-21 10:58:30.578535849 +0000 UTC m=+3148.336618816" observedRunningTime="2026-01-21 10:58:31.784269111 +0000 UTC m=+3149.542352078" watchObservedRunningTime="2026-01-21 10:58:31.787846003 +0000 UTC m=+3149.545928970" Jan 21 10:58:39 crc kubenswrapper[4684]: I0121 10:58:39.415882 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-9w8bz" Jan 21 10:58:39 crc kubenswrapper[4684]: I0121 10:58:39.416529 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-9w8bz" Jan 21 10:58:39 crc kubenswrapper[4684]: I0121 10:58:39.447197 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-9w8bz" Jan 21 10:58:39 crc kubenswrapper[4684]: I0121 10:58:39.863631 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-9w8bz" Jan 21 10:58:39 crc kubenswrapper[4684]: I0121 10:58:39.923668 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-9w8bz"] Jan 21 10:58:41 crc kubenswrapper[4684]: I0121 10:58:41.844664 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-9w8bz" podUID="be26266e-e179-4e5e-a57b-32846d79f042" containerName="registry-server" 
containerID="cri-o://e0859a65d3ed21d9539785c7b1760d166411d36024efd70fd641addeaa89c5f0" gracePeriod=2 Jan 21 10:58:42 crc kubenswrapper[4684]: I0121 10:58:42.768038 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-9w8bz" Jan 21 10:58:42 crc kubenswrapper[4684]: I0121 10:58:42.855261 4684 generic.go:334] "Generic (PLEG): container finished" podID="be26266e-e179-4e5e-a57b-32846d79f042" containerID="e0859a65d3ed21d9539785c7b1760d166411d36024efd70fd641addeaa89c5f0" exitCode=0 Jan 21 10:58:42 crc kubenswrapper[4684]: I0121 10:58:42.855312 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-9w8bz" event={"ID":"be26266e-e179-4e5e-a57b-32846d79f042","Type":"ContainerDied","Data":"e0859a65d3ed21d9539785c7b1760d166411d36024efd70fd641addeaa89c5f0"} Jan 21 10:58:42 crc kubenswrapper[4684]: I0121 10:58:42.855370 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-9w8bz" event={"ID":"be26266e-e179-4e5e-a57b-32846d79f042","Type":"ContainerDied","Data":"06ec28b3eb24bf06dcfedc1ebddb2d6aa42844095c643aee49a81010f20061e6"} Jan 21 10:58:42 crc kubenswrapper[4684]: I0121 10:58:42.855428 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-9w8bz" Jan 21 10:58:42 crc kubenswrapper[4684]: I0121 10:58:42.855480 4684 scope.go:117] "RemoveContainer" containerID="e0859a65d3ed21d9539785c7b1760d166411d36024efd70fd641addeaa89c5f0" Jan 21 10:58:42 crc kubenswrapper[4684]: I0121 10:58:42.879969 4684 scope.go:117] "RemoveContainer" containerID="e0859a65d3ed21d9539785c7b1760d166411d36024efd70fd641addeaa89c5f0" Jan 21 10:58:42 crc kubenswrapper[4684]: E0121 10:58:42.880538 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0859a65d3ed21d9539785c7b1760d166411d36024efd70fd641addeaa89c5f0\": container with ID starting with e0859a65d3ed21d9539785c7b1760d166411d36024efd70fd641addeaa89c5f0 not found: ID does not exist" containerID="e0859a65d3ed21d9539785c7b1760d166411d36024efd70fd641addeaa89c5f0" Jan 21 10:58:42 crc kubenswrapper[4684]: I0121 10:58:42.880583 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0859a65d3ed21d9539785c7b1760d166411d36024efd70fd641addeaa89c5f0"} err="failed to get container status \"e0859a65d3ed21d9539785c7b1760d166411d36024efd70fd641addeaa89c5f0\": rpc error: code = NotFound desc = could not find container \"e0859a65d3ed21d9539785c7b1760d166411d36024efd70fd641addeaa89c5f0\": container with ID starting with e0859a65d3ed21d9539785c7b1760d166411d36024efd70fd641addeaa89c5f0 not found: ID does not exist" Jan 21 10:58:42 crc kubenswrapper[4684]: I0121 10:58:42.887762 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgzws\" (UniqueName: \"kubernetes.io/projected/be26266e-e179-4e5e-a57b-32846d79f042-kube-api-access-hgzws\") pod \"be26266e-e179-4e5e-a57b-32846d79f042\" (UID: \"be26266e-e179-4e5e-a57b-32846d79f042\") " Jan 21 10:58:42 crc kubenswrapper[4684]: I0121 10:58:42.895451 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be26266e-e179-4e5e-a57b-32846d79f042-kube-api-access-hgzws" (OuterVolumeSpecName: "kube-api-access-hgzws") pod "be26266e-e179-4e5e-a57b-32846d79f042" (UID: "be26266e-e179-4e5e-a57b-32846d79f042"). 
InnerVolumeSpecName "kube-api-access-hgzws". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:58:42 crc kubenswrapper[4684]: I0121 10:58:42.990260 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgzws\" (UniqueName: \"kubernetes.io/projected/be26266e-e179-4e5e-a57b-32846d79f042-kube-api-access-hgzws\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:43 crc kubenswrapper[4684]: I0121 10:58:43.189014 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-9w8bz"] Jan 21 10:58:43 crc kubenswrapper[4684]: I0121 10:58:43.203606 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-9w8bz"] Jan 21 10:58:43 crc kubenswrapper[4684]: I0121 10:58:43.514888 4684 scope.go:117] "RemoveContainer" containerID="e671420303f7cbbe060431f54d1fda2dffb6433ae1876c4b45ffe1094abdc778" Jan 21 10:58:43 crc kubenswrapper[4684]: E0121 10:58:43.515219 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:58:44 crc kubenswrapper[4684]: I0121 10:58:44.524769 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be26266e-e179-4e5e-a57b-32846d79f042" path="/var/lib/kubelet/pods/be26266e-e179-4e5e-a57b-32846d79f042/volumes" Jan 21 10:58:54 crc kubenswrapper[4684]: I0121 10:58:54.515232 4684 scope.go:117] "RemoveContainer" containerID="e671420303f7cbbe060431f54d1fda2dffb6433ae1876c4b45ffe1094abdc778" Jan 21 10:58:54 crc kubenswrapper[4684]: E0121 10:58:54.516291 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:59:06 crc kubenswrapper[4684]: I0121 10:59:06.514151 4684 scope.go:117] "RemoveContainer" containerID="e671420303f7cbbe060431f54d1fda2dffb6433ae1876c4b45ffe1094abdc778" Jan 21 10:59:06 crc kubenswrapper[4684]: E0121 10:59:06.514909 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 10:59:10 crc kubenswrapper[4684]: I0121 10:59:10.356818 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f9svc"] Jan 21 10:59:10 crc kubenswrapper[4684]: E0121 10:59:10.357759 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be26266e-e179-4e5e-a57b-32846d79f042" containerName="registry-server" Jan 21 10:59:10 crc kubenswrapper[4684]: I0121 10:59:10.357775 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="be26266e-e179-4e5e-a57b-32846d79f042" containerName="registry-server" 
Jan 21 10:59:10 crc kubenswrapper[4684]: I0121 10:59:10.357922 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="be26266e-e179-4e5e-a57b-32846d79f042" containerName="registry-server"
Jan 21 10:59:10 crc kubenswrapper[4684]: I0121 10:59:10.358953 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f9svc"
Jan 21 10:59:10 crc kubenswrapper[4684]: I0121 10:59:10.364716 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f9svc"]
Jan 21 10:59:10 crc kubenswrapper[4684]: I0121 10:59:10.523698 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46e02e5c-1835-4c35-bde3-653fcc7606b5-utilities\") pod \"redhat-operators-f9svc\" (UID: \"46e02e5c-1835-4c35-bde3-653fcc7606b5\") " pod="openshift-marketplace/redhat-operators-f9svc"
Jan 21 10:59:10 crc kubenswrapper[4684]: I0121 10:59:10.523986 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46e02e5c-1835-4c35-bde3-653fcc7606b5-catalog-content\") pod \"redhat-operators-f9svc\" (UID: \"46e02e5c-1835-4c35-bde3-653fcc7606b5\") " pod="openshift-marketplace/redhat-operators-f9svc"
Jan 21 10:59:10 crc kubenswrapper[4684]: I0121 10:59:10.524080 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvkrl\" (UniqueName: \"kubernetes.io/projected/46e02e5c-1835-4c35-bde3-653fcc7606b5-kube-api-access-qvkrl\") pod \"redhat-operators-f9svc\" (UID: \"46e02e5c-1835-4c35-bde3-653fcc7606b5\") " pod="openshift-marketplace/redhat-operators-f9svc"
Jan 21 10:59:10 crc kubenswrapper[4684]: I0121 10:59:10.625680 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46e02e5c-1835-4c35-bde3-653fcc7606b5-utilities\") pod \"redhat-operators-f9svc\" (UID: \"46e02e5c-1835-4c35-bde3-653fcc7606b5\") " pod="openshift-marketplace/redhat-operators-f9svc"
Jan 21 10:59:10 crc kubenswrapper[4684]: I0121 10:59:10.625781 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46e02e5c-1835-4c35-bde3-653fcc7606b5-catalog-content\") pod \"redhat-operators-f9svc\" (UID: \"46e02e5c-1835-4c35-bde3-653fcc7606b5\") " pod="openshift-marketplace/redhat-operators-f9svc"
Jan 21 10:59:10 crc kubenswrapper[4684]: I0121 10:59:10.625813 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvkrl\" (UniqueName: \"kubernetes.io/projected/46e02e5c-1835-4c35-bde3-653fcc7606b5-kube-api-access-qvkrl\") pod \"redhat-operators-f9svc\" (UID: \"46e02e5c-1835-4c35-bde3-653fcc7606b5\") " pod="openshift-marketplace/redhat-operators-f9svc"
Jan 21 10:59:10 crc kubenswrapper[4684]: I0121 10:59:10.626947 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46e02e5c-1835-4c35-bde3-653fcc7606b5-utilities\") pod \"redhat-operators-f9svc\" (UID: \"46e02e5c-1835-4c35-bde3-653fcc7606b5\") " pod="openshift-marketplace/redhat-operators-f9svc"
Jan 21 10:59:10 crc kubenswrapper[4684]: I0121 10:59:10.626973 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46e02e5c-1835-4c35-bde3-653fcc7606b5-catalog-content\") pod \"redhat-operators-f9svc\" (UID: \"46e02e5c-1835-4c35-bde3-653fcc7606b5\") " pod="openshift-marketplace/redhat-operators-f9svc"
Jan 21 10:59:10 crc kubenswrapper[4684]: I0121 10:59:10.646499 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvkrl\" (UniqueName: \"kubernetes.io/projected/46e02e5c-1835-4c35-bde3-653fcc7606b5-kube-api-access-qvkrl\") pod \"redhat-operators-f9svc\" (UID: \"46e02e5c-1835-4c35-bde3-653fcc7606b5\") " pod="openshift-marketplace/redhat-operators-f9svc"
Jan 21 10:59:10 crc kubenswrapper[4684]: I0121 10:59:10.684852 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f9svc"
Jan 21 10:59:10 crc kubenswrapper[4684]: I0121 10:59:10.909999 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f9svc"]
Jan 21 10:59:11 crc kubenswrapper[4684]: I0121 10:59:11.057181 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9svc" event={"ID":"46e02e5c-1835-4c35-bde3-653fcc7606b5","Type":"ContainerStarted","Data":"15c12a7f09d81bc5c0156e4a32947a1dcca120a9043ce027272cfcb4dae02d0e"}
Jan 21 10:59:12 crc kubenswrapper[4684]: I0121 10:59:12.067264 4684 generic.go:334] "Generic (PLEG): container finished" podID="46e02e5c-1835-4c35-bde3-653fcc7606b5" containerID="6882cce1eb2a90f3dd48fc06488b35f4b8ba80f30db810f30638d60e33b20f42" exitCode=0
Jan 21 10:59:12 crc kubenswrapper[4684]: I0121 10:59:12.067317 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9svc" event={"ID":"46e02e5c-1835-4c35-bde3-653fcc7606b5","Type":"ContainerDied","Data":"6882cce1eb2a90f3dd48fc06488b35f4b8ba80f30db810f30638d60e33b20f42"}
Jan 21 10:59:15 crc kubenswrapper[4684]: I0121 10:59:15.085798 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9svc" event={"ID":"46e02e5c-1835-4c35-bde3-653fcc7606b5","Type":"ContainerStarted","Data":"67e51c4e2c37107282baaa44ebb001dca26208c3626b17d695fd3c11cc1af0fb"}
Jan 21 10:59:16 crc kubenswrapper[4684]: I0121 10:59:16.096143 4684 generic.go:334] "Generic (PLEG): container finished" podID="46e02e5c-1835-4c35-bde3-653fcc7606b5" containerID="67e51c4e2c37107282baaa44ebb001dca26208c3626b17d695fd3c11cc1af0fb" exitCode=0
Jan 21 10:59:16 crc kubenswrapper[4684]: I0121 10:59:16.096191 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9svc" event={"ID":"46e02e5c-1835-4c35-bde3-653fcc7606b5","Type":"ContainerDied","Data":"67e51c4e2c37107282baaa44ebb001dca26208c3626b17d695fd3c11cc1af0fb"}
Jan 21 10:59:17 crc kubenswrapper[4684]: I0121 10:59:17.106139 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9svc" event={"ID":"46e02e5c-1835-4c35-bde3-653fcc7606b5","Type":"ContainerStarted","Data":"6033d2b9293baf20f914b4acfb5734128ff69e02cb472624d1cda6b370ce83e9"}
Jan 21 10:59:17 crc kubenswrapper[4684]: I0121 10:59:17.515400 4684 scope.go:117] "RemoveContainer" containerID="e671420303f7cbbe060431f54d1fda2dffb6433ae1876c4b45ffe1094abdc778"
Jan 21 10:59:17 crc kubenswrapper[4684]: E0121 10:59:17.515675 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1"
Jan 21 10:59:18 crc kubenswrapper[4684]: I0121 10:59:18.130193 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f9svc" podStartSLOduration=3.330381969 podStartE2EDuration="8.130175757s" podCreationTimestamp="2026-01-21 10:59:10 +0000 UTC" firstStartedPulling="2026-01-21 10:59:12.068972677 +0000 UTC m=+3189.827055644" lastFinishedPulling="2026-01-21 10:59:16.868766465 +0000 UTC m=+3194.626849432" observedRunningTime="2026-01-21 10:59:18.127424771 +0000 UTC m=+3195.885507738" watchObservedRunningTime="2026-01-21 10:59:18.130175757 +0000 UTC m=+3195.888258724"
Jan 21 10:59:20 crc kubenswrapper[4684]: I0121 10:59:20.685852 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f9svc"
Jan 21 10:59:20 crc kubenswrapper[4684]: I0121 10:59:20.686916 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f9svc"
Jan 21 10:59:21 crc kubenswrapper[4684]: I0121 10:59:21.727690 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f9svc" podUID="46e02e5c-1835-4c35-bde3-653fcc7606b5" containerName="registry-server" probeResult="failure" output=<
Jan 21 10:59:21 crc kubenswrapper[4684]: timeout: failed to connect service ":50051" within 1s
Jan 21 10:59:21 crc kubenswrapper[4684]: >
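The startup-probe failure above is the registry-server's gRPC health check on port 50051 timing out; the wording of the output matches the grpc_health_probe tool that marketplace catalog pods commonly exec (an inference from the output format, not something the log states). The reachability half of that check reduces to a dial with a one-second deadline; a sketch (the real probe would additionally call the gRPC Health/Check RPC, which is omitted here):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithin reproduces the connect-with-deadline part of the failed startup
// probe: if the registry-server's gRPC port cannot be reached within the
// timeout, report a failure like the logged "failed to connect ... within 1s".
func dialWithin(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return fmt.Errorf("timeout: failed to connect service %q within %s", addr, timeout)
	}
	return conn.Close()
}

func main() {
	if err := dialWithin("127.0.0.1:50051", 1*time.Second); err != nil {
		fmt.Println(err)
	}
}
```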
Need to start a new one" pod="openshift-marketplace/redhat-operators-f9svc" Jan 21 10:59:33 crc kubenswrapper[4684]: I0121 10:59:33.254520 4684 generic.go:334] "Generic (PLEG): container finished" podID="46e02e5c-1835-4c35-bde3-653fcc7606b5" containerID="6033d2b9293baf20f914b4acfb5734128ff69e02cb472624d1cda6b370ce83e9" exitCode=0 Jan 21 10:59:33 crc kubenswrapper[4684]: I0121 10:59:33.254560 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9svc" event={"ID":"46e02e5c-1835-4c35-bde3-653fcc7606b5","Type":"ContainerDied","Data":"6033d2b9293baf20f914b4acfb5734128ff69e02cb472624d1cda6b370ce83e9"} Jan 21 10:59:33 crc kubenswrapper[4684]: I0121 10:59:33.254585 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9svc" event={"ID":"46e02e5c-1835-4c35-bde3-653fcc7606b5","Type":"ContainerDied","Data":"15c12a7f09d81bc5c0156e4a32947a1dcca120a9043ce027272cfcb4dae02d0e"} Jan 21 10:59:33 crc kubenswrapper[4684]: I0121 10:59:33.254602 4684 scope.go:117] "RemoveContainer" containerID="6033d2b9293baf20f914b4acfb5734128ff69e02cb472624d1cda6b370ce83e9" Jan 21 10:59:33 crc kubenswrapper[4684]: I0121 10:59:33.254721 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f9svc" Jan 21 10:59:33 crc kubenswrapper[4684]: I0121 10:59:33.283431 4684 scope.go:117] "RemoveContainer" containerID="67e51c4e2c37107282baaa44ebb001dca26208c3626b17d695fd3c11cc1af0fb" Jan 21 10:59:33 crc kubenswrapper[4684]: I0121 10:59:33.290339 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46e02e5c-1835-4c35-bde3-653fcc7606b5-utilities\") pod \"46e02e5c-1835-4c35-bde3-653fcc7606b5\" (UID: \"46e02e5c-1835-4c35-bde3-653fcc7606b5\") " Jan 21 10:59:33 crc kubenswrapper[4684]: I0121 10:59:33.290400 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvkrl\" (UniqueName: \"kubernetes.io/projected/46e02e5c-1835-4c35-bde3-653fcc7606b5-kube-api-access-qvkrl\") pod \"46e02e5c-1835-4c35-bde3-653fcc7606b5\" (UID: \"46e02e5c-1835-4c35-bde3-653fcc7606b5\") " Jan 21 10:59:33 crc kubenswrapper[4684]: I0121 10:59:33.290495 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46e02e5c-1835-4c35-bde3-653fcc7606b5-catalog-content\") pod \"46e02e5c-1835-4c35-bde3-653fcc7606b5\" (UID: \"46e02e5c-1835-4c35-bde3-653fcc7606b5\") " Jan 21 10:59:33 crc kubenswrapper[4684]: I0121 10:59:33.291929 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46e02e5c-1835-4c35-bde3-653fcc7606b5-utilities" (OuterVolumeSpecName: "utilities") pod "46e02e5c-1835-4c35-bde3-653fcc7606b5" (UID: "46e02e5c-1835-4c35-bde3-653fcc7606b5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:59:33 crc kubenswrapper[4684]: I0121 10:59:33.299092 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46e02e5c-1835-4c35-bde3-653fcc7606b5-kube-api-access-qvkrl" (OuterVolumeSpecName: "kube-api-access-qvkrl") pod "46e02e5c-1835-4c35-bde3-653fcc7606b5" (UID: "46e02e5c-1835-4c35-bde3-653fcc7606b5"). InnerVolumeSpecName "kube-api-access-qvkrl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:59:33 crc kubenswrapper[4684]: I0121 10:59:33.307431 4684 scope.go:117] "RemoveContainer" containerID="6882cce1eb2a90f3dd48fc06488b35f4b8ba80f30db810f30638d60e33b20f42" Jan 21 10:59:33 crc kubenswrapper[4684]: I0121 10:59:33.349117 4684 scope.go:117] "RemoveContainer" containerID="6033d2b9293baf20f914b4acfb5734128ff69e02cb472624d1cda6b370ce83e9" Jan 21 10:59:33 crc kubenswrapper[4684]: E0121 10:59:33.349670 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6033d2b9293baf20f914b4acfb5734128ff69e02cb472624d1cda6b370ce83e9\": container with ID starting with 6033d2b9293baf20f914b4acfb5734128ff69e02cb472624d1cda6b370ce83e9 not found: ID does not exist" containerID="6033d2b9293baf20f914b4acfb5734128ff69e02cb472624d1cda6b370ce83e9" Jan 21 10:59:33 crc kubenswrapper[4684]: I0121 10:59:33.349717 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6033d2b9293baf20f914b4acfb5734128ff69e02cb472624d1cda6b370ce83e9"} err="failed to get container status \"6033d2b9293baf20f914b4acfb5734128ff69e02cb472624d1cda6b370ce83e9\": rpc error: code = NotFound desc = could not find container \"6033d2b9293baf20f914b4acfb5734128ff69e02cb472624d1cda6b370ce83e9\": container with ID starting with 6033d2b9293baf20f914b4acfb5734128ff69e02cb472624d1cda6b370ce83e9 not found: ID does not exist" Jan 21 10:59:33 crc kubenswrapper[4684]: I0121 10:59:33.349743 4684 scope.go:117] "RemoveContainer" containerID="67e51c4e2c37107282baaa44ebb001dca26208c3626b17d695fd3c11cc1af0fb" Jan 21 10:59:33 crc kubenswrapper[4684]: E0121 10:59:33.350070 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67e51c4e2c37107282baaa44ebb001dca26208c3626b17d695fd3c11cc1af0fb\": container with ID starting with 67e51c4e2c37107282baaa44ebb001dca26208c3626b17d695fd3c11cc1af0fb not found: ID does not exist" containerID="67e51c4e2c37107282baaa44ebb001dca26208c3626b17d695fd3c11cc1af0fb" Jan 21 10:59:33 crc kubenswrapper[4684]: I0121 10:59:33.350109 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67e51c4e2c37107282baaa44ebb001dca26208c3626b17d695fd3c11cc1af0fb"} err="failed to get container status \"67e51c4e2c37107282baaa44ebb001dca26208c3626b17d695fd3c11cc1af0fb\": rpc error: code = NotFound desc = could not find container \"67e51c4e2c37107282baaa44ebb001dca26208c3626b17d695fd3c11cc1af0fb\": container with ID starting with 67e51c4e2c37107282baaa44ebb001dca26208c3626b17d695fd3c11cc1af0fb not found: ID does not exist" Jan 21 10:59:33 crc kubenswrapper[4684]: I0121 10:59:33.350137 4684 scope.go:117] "RemoveContainer" containerID="6882cce1eb2a90f3dd48fc06488b35f4b8ba80f30db810f30638d60e33b20f42" Jan 21 10:59:33 crc kubenswrapper[4684]: E0121 10:59:33.350371 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6882cce1eb2a90f3dd48fc06488b35f4b8ba80f30db810f30638d60e33b20f42\": container with ID starting with 6882cce1eb2a90f3dd48fc06488b35f4b8ba80f30db810f30638d60e33b20f42 not found: ID does not exist" containerID="6882cce1eb2a90f3dd48fc06488b35f4b8ba80f30db810f30638d60e33b20f42" Jan 21 10:59:33 crc kubenswrapper[4684]: I0121 10:59:33.350397 4684 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6882cce1eb2a90f3dd48fc06488b35f4b8ba80f30db810f30638d60e33b20f42"} err="failed to get container status \"6882cce1eb2a90f3dd48fc06488b35f4b8ba80f30db810f30638d60e33b20f42\": rpc error: code = NotFound desc = could not find container \"6882cce1eb2a90f3dd48fc06488b35f4b8ba80f30db810f30638d60e33b20f42\": container with ID starting with 6882cce1eb2a90f3dd48fc06488b35f4b8ba80f30db810f30638d60e33b20f42 not found: ID does not exist" Jan 21 10:59:33 crc kubenswrapper[4684]: I0121 10:59:33.392177 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46e02e5c-1835-4c35-bde3-653fcc7606b5-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:33 crc kubenswrapper[4684]: I0121 10:59:33.392234 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvkrl\" (UniqueName: \"kubernetes.io/projected/46e02e5c-1835-4c35-bde3-653fcc7606b5-kube-api-access-qvkrl\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:33 crc kubenswrapper[4684]: I0121 10:59:33.423626 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46e02e5c-1835-4c35-bde3-653fcc7606b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46e02e5c-1835-4c35-bde3-653fcc7606b5" (UID: "46e02e5c-1835-4c35-bde3-653fcc7606b5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:59:33 crc kubenswrapper[4684]: I0121 10:59:33.494300 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46e02e5c-1835-4c35-bde3-653fcc7606b5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:33 crc kubenswrapper[4684]: I0121 10:59:33.589930 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f9svc"] Jan 21 10:59:33 crc kubenswrapper[4684]: I0121 10:59:33.611681 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f9svc"] Jan 21 10:59:34 crc kubenswrapper[4684]: I0121 10:59:34.523905 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46e02e5c-1835-4c35-bde3-653fcc7606b5" path="/var/lib/kubelet/pods/46e02e5c-1835-4c35-bde3-653fcc7606b5/volumes" Jan 21 10:59:42 crc kubenswrapper[4684]: I0121 10:59:42.524217 4684 scope.go:117] "RemoveContainer" containerID="e671420303f7cbbe060431f54d1fda2dffb6433ae1876c4b45ffe1094abdc778" Jan 21 10:59:45 crc kubenswrapper[4684]: I0121 10:59:45.358476 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" event={"ID":"55d2c484-cf10-46b5-913f-2a033a2ff5c1","Type":"ContainerStarted","Data":"c0b4410078f4455b4ccb8e466819a7d72673aed7cd694f5cc9853c86483d835e"} Jan 21 11:00:00 crc kubenswrapper[4684]: I0121 11:00:00.146899 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483220-czb5t"] Jan 21 11:00:00 crc kubenswrapper[4684]: E0121 11:00:00.147779 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46e02e5c-1835-4c35-bde3-653fcc7606b5" containerName="extract-utilities" Jan 21 11:00:00 crc kubenswrapper[4684]: I0121 11:00:00.147798 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="46e02e5c-1835-4c35-bde3-653fcc7606b5" containerName="extract-utilities" Jan 21 11:00:00 crc kubenswrapper[4684]: E0121 11:00:00.147857 4684 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="46e02e5c-1835-4c35-bde3-653fcc7606b5" containerName="extract-content" Jan 21 11:00:00 crc kubenswrapper[4684]: I0121 11:00:00.147868 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="46e02e5c-1835-4c35-bde3-653fcc7606b5" containerName="extract-content" Jan 21 11:00:00 crc kubenswrapper[4684]: E0121 11:00:00.147882 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46e02e5c-1835-4c35-bde3-653fcc7606b5" containerName="registry-server" Jan 21 11:00:00 crc kubenswrapper[4684]: I0121 11:00:00.147889 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="46e02e5c-1835-4c35-bde3-653fcc7606b5" containerName="registry-server" Jan 21 11:00:00 crc kubenswrapper[4684]: I0121 11:00:00.150395 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="46e02e5c-1835-4c35-bde3-653fcc7606b5" containerName="registry-server" Jan 21 11:00:00 crc kubenswrapper[4684]: I0121 11:00:00.151895 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-czb5t" Jan 21 11:00:00 crc kubenswrapper[4684]: I0121 11:00:00.154100 4684 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 11:00:00 crc kubenswrapper[4684]: I0121 11:00:00.154392 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483220-czb5t"] Jan 21 11:00:00 crc kubenswrapper[4684]: I0121 11:00:00.154406 4684 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 11:00:00 crc kubenswrapper[4684]: I0121 11:00:00.345585 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9-secret-volume\") pod \"collect-profiles-29483220-czb5t\" (UID: \"3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-czb5t" Jan 21 11:00:00 crc kubenswrapper[4684]: I0121 11:00:00.345673 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xrfx\" (UniqueName: \"kubernetes.io/projected/3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9-kube-api-access-8xrfx\") pod \"collect-profiles-29483220-czb5t\" (UID: \"3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-czb5t" Jan 21 11:00:00 crc kubenswrapper[4684]: I0121 11:00:00.345748 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9-config-volume\") pod \"collect-profiles-29483220-czb5t\" (UID: \"3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-czb5t" Jan 21 11:00:00 crc kubenswrapper[4684]: I0121 11:00:00.447130 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9-config-volume\") pod \"collect-profiles-29483220-czb5t\" (UID: \"3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-czb5t" Jan 21 11:00:00 crc kubenswrapper[4684]: I0121 11:00:00.447551 4684 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9-secret-volume\") pod \"collect-profiles-29483220-czb5t\" (UID: \"3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-czb5t" Jan 21 11:00:00 crc kubenswrapper[4684]: I0121 11:00:00.447607 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xrfx\" (UniqueName: \"kubernetes.io/projected/3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9-kube-api-access-8xrfx\") pod \"collect-profiles-29483220-czb5t\" (UID: \"3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-czb5t" Jan 21 11:00:00 crc kubenswrapper[4684]: I0121 11:00:00.448460 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9-config-volume\") pod \"collect-profiles-29483220-czb5t\" (UID: \"3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-czb5t" Jan 21 11:00:00 crc kubenswrapper[4684]: I0121 11:00:00.456141 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9-secret-volume\") pod \"collect-profiles-29483220-czb5t\" (UID: \"3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-czb5t" Jan 21 11:00:00 crc kubenswrapper[4684]: I0121 11:00:00.477830 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xrfx\" (UniqueName: \"kubernetes.io/projected/3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9-kube-api-access-8xrfx\") pod \"collect-profiles-29483220-czb5t\" (UID: \"3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-czb5t" Jan 21 11:00:00 crc kubenswrapper[4684]: I0121 11:00:00.772822 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-czb5t" Jan 21 11:00:00 crc kubenswrapper[4684]: I0121 11:00:00.988731 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483220-czb5t"] Jan 21 11:00:01 crc kubenswrapper[4684]: I0121 11:00:01.491983 4684 generic.go:334] "Generic (PLEG): container finished" podID="3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9" containerID="1a1f9cfe4f04b924f04255a652bf78454e36023a0ecfbddb1b38c73b658eebb8" exitCode=0 Jan 21 11:00:01 crc kubenswrapper[4684]: I0121 11:00:01.492132 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-czb5t" event={"ID":"3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9","Type":"ContainerDied","Data":"1a1f9cfe4f04b924f04255a652bf78454e36023a0ecfbddb1b38c73b658eebb8"} Jan 21 11:00:01 crc kubenswrapper[4684]: I0121 11:00:01.492443 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-czb5t" event={"ID":"3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9","Type":"ContainerStarted","Data":"b2fa091d5b37b5c8af7639e4b5cde8518bc55f512ea0c0c19ce7ea55bcaf0bd4"} Jan 21 11:00:02 crc kubenswrapper[4684]: I0121 11:00:02.755221 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-czb5t" Jan 21 11:00:02 crc kubenswrapper[4684]: I0121 11:00:02.885951 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9-secret-volume\") pod \"3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9\" (UID: \"3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9\") " Jan 21 11:00:02 crc kubenswrapper[4684]: I0121 11:00:02.886044 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9-config-volume\") pod \"3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9\" (UID: \"3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9\") " Jan 21 11:00:02 crc kubenswrapper[4684]: I0121 11:00:02.886186 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xrfx\" (UniqueName: \"kubernetes.io/projected/3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9-kube-api-access-8xrfx\") pod \"3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9\" (UID: \"3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9\") " Jan 21 11:00:02 crc kubenswrapper[4684]: I0121 11:00:02.887316 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9-config-volume" (OuterVolumeSpecName: "config-volume") pod "3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9" (UID: "3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:00:02 crc kubenswrapper[4684]: I0121 11:00:02.892829 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9-kube-api-access-8xrfx" (OuterVolumeSpecName: "kube-api-access-8xrfx") pod "3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9" (UID: "3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9"). InnerVolumeSpecName "kube-api-access-8xrfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:00:02 crc kubenswrapper[4684]: I0121 11:00:02.893663 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9" (UID: "3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:02 crc kubenswrapper[4684]: I0121 11:00:02.987928 4684 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:02 crc kubenswrapper[4684]: I0121 11:00:02.988017 4684 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:02 crc kubenswrapper[4684]: I0121 11:00:02.988030 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xrfx\" (UniqueName: \"kubernetes.io/projected/3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9-kube-api-access-8xrfx\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:03 crc kubenswrapper[4684]: I0121 11:00:03.508078 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-czb5t" event={"ID":"3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9","Type":"ContainerDied","Data":"b2fa091d5b37b5c8af7639e4b5cde8518bc55f512ea0c0c19ce7ea55bcaf0bd4"} Jan 21 11:00:03 crc kubenswrapper[4684]: I0121 11:00:03.508692 4684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2fa091d5b37b5c8af7639e4b5cde8518bc55f512ea0c0c19ce7ea55bcaf0bd4" Jan 21 11:00:03 crc kubenswrapper[4684]: I0121 11:00:03.508199 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-czb5t" Jan 21 11:00:03 crc kubenswrapper[4684]: I0121 11:00:03.826292 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483175-fkcv2"] Jan 21 11:00:03 crc kubenswrapper[4684]: I0121 11:00:03.833333 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483175-fkcv2"] Jan 21 11:00:04 crc kubenswrapper[4684]: I0121 11:00:04.527459 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b300558-3052-4644-9616-86a87a9eacbc" path="/var/lib/kubelet/pods/8b300558-3052-4644-9616-86a87a9eacbc/volumes" Jan 21 11:00:07 crc kubenswrapper[4684]: I0121 11:00:07.673684 4684 scope.go:117] "RemoveContainer" containerID="86b2f69531cbe158beb3576156a78673cac28d6252d724c7e49575f70cb7aa32" Jan 21 11:02:05 crc kubenswrapper[4684]: I0121 11:02:05.875273 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kj4sx"] Jan 21 11:02:05 crc kubenswrapper[4684]: E0121 11:02:05.879614 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9" containerName="collect-profiles" Jan 21 11:02:05 crc kubenswrapper[4684]: I0121 11:02:05.879648 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9" containerName="collect-profiles" Jan 21 11:02:05 crc kubenswrapper[4684]: I0121 11:02:05.879778 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a68c49b-dc53-4bb8-8fd4-6f6096d1dda9" containerName="collect-profiles" Jan 21 11:02:05 crc kubenswrapper[4684]: I0121 11:02:05.880947 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kj4sx" Jan 21 11:02:05 crc kubenswrapper[4684]: I0121 11:02:05.896545 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kj4sx"] Jan 21 11:02:05 crc kubenswrapper[4684]: I0121 11:02:05.956759 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z45x8\" (UniqueName: \"kubernetes.io/projected/eeaf5d2e-696e-4405-a9b9-29fcb9353086-kube-api-access-z45x8\") pod \"certified-operators-kj4sx\" (UID: \"eeaf5d2e-696e-4405-a9b9-29fcb9353086\") " pod="openshift-marketplace/certified-operators-kj4sx" Jan 21 11:02:05 crc kubenswrapper[4684]: I0121 11:02:05.956822 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeaf5d2e-696e-4405-a9b9-29fcb9353086-utilities\") pod \"certified-operators-kj4sx\" (UID: \"eeaf5d2e-696e-4405-a9b9-29fcb9353086\") " pod="openshift-marketplace/certified-operators-kj4sx" Jan 21 11:02:05 crc kubenswrapper[4684]: I0121 11:02:05.956932 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeaf5d2e-696e-4405-a9b9-29fcb9353086-catalog-content\") pod \"certified-operators-kj4sx\" (UID: \"eeaf5d2e-696e-4405-a9b9-29fcb9353086\") " pod="openshift-marketplace/certified-operators-kj4sx" Jan 21 11:02:06 crc kubenswrapper[4684]: I0121 11:02:06.058624 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeaf5d2e-696e-4405-a9b9-29fcb9353086-catalog-content\") pod \"certified-operators-kj4sx\" (UID: \"eeaf5d2e-696e-4405-a9b9-29fcb9353086\") " pod="openshift-marketplace/certified-operators-kj4sx" Jan 21 11:02:06 crc kubenswrapper[4684]: I0121 11:02:06.058689 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z45x8\" (UniqueName: \"kubernetes.io/projected/eeaf5d2e-696e-4405-a9b9-29fcb9353086-kube-api-access-z45x8\") pod \"certified-operators-kj4sx\" (UID: \"eeaf5d2e-696e-4405-a9b9-29fcb9353086\") " pod="openshift-marketplace/certified-operators-kj4sx" Jan 21 11:02:06 crc kubenswrapper[4684]: I0121 11:02:06.058727 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeaf5d2e-696e-4405-a9b9-29fcb9353086-utilities\") pod \"certified-operators-kj4sx\" (UID: \"eeaf5d2e-696e-4405-a9b9-29fcb9353086\") " pod="openshift-marketplace/certified-operators-kj4sx" Jan 21 11:02:06 crc kubenswrapper[4684]: I0121 11:02:06.059202 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeaf5d2e-696e-4405-a9b9-29fcb9353086-catalog-content\") pod \"certified-operators-kj4sx\" (UID: \"eeaf5d2e-696e-4405-a9b9-29fcb9353086\") " pod="openshift-marketplace/certified-operators-kj4sx" Jan 21 11:02:06 crc kubenswrapper[4684]: I0121 11:02:06.059258 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeaf5d2e-696e-4405-a9b9-29fcb9353086-utilities\") pod \"certified-operators-kj4sx\" (UID: \"eeaf5d2e-696e-4405-a9b9-29fcb9353086\") " pod="openshift-marketplace/certified-operators-kj4sx" Jan 21 11:02:06 crc kubenswrapper[4684]: I0121 11:02:06.078736 4684 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-z45x8\" (UniqueName: \"kubernetes.io/projected/eeaf5d2e-696e-4405-a9b9-29fcb9353086-kube-api-access-z45x8\") pod \"certified-operators-kj4sx\" (UID: \"eeaf5d2e-696e-4405-a9b9-29fcb9353086\") " pod="openshift-marketplace/certified-operators-kj4sx" Jan 21 11:02:06 crc kubenswrapper[4684]: I0121 11:02:06.197288 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kj4sx" Jan 21 11:02:06 crc kubenswrapper[4684]: I0121 11:02:06.474927 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kj4sx"] Jan 21 11:02:07 crc kubenswrapper[4684]: I0121 11:02:07.302882 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:02:07 crc kubenswrapper[4684]: I0121 11:02:07.303209 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:02:07 crc kubenswrapper[4684]: I0121 11:02:07.453546 4684 generic.go:334] "Generic (PLEG): container finished" podID="eeaf5d2e-696e-4405-a9b9-29fcb9353086" containerID="ce000a262f0a4c46a1c588364628aab42810427933d5b579df2b240b7a6ade4e" exitCode=0 Jan 21 11:02:07 crc kubenswrapper[4684]: I0121 11:02:07.453589 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kj4sx" event={"ID":"eeaf5d2e-696e-4405-a9b9-29fcb9353086","Type":"ContainerDied","Data":"ce000a262f0a4c46a1c588364628aab42810427933d5b579df2b240b7a6ade4e"} Jan 21 11:02:07 crc kubenswrapper[4684]: I0121 11:02:07.453615 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kj4sx" event={"ID":"eeaf5d2e-696e-4405-a9b9-29fcb9353086","Type":"ContainerStarted","Data":"453ece6270111fdbb86de2102a9842abfb7dd9301515a32d69b3811338d1e1b2"} Jan 21 11:02:09 crc kubenswrapper[4684]: I0121 11:02:09.469018 4684 generic.go:334] "Generic (PLEG): container finished" podID="eeaf5d2e-696e-4405-a9b9-29fcb9353086" containerID="f6d89e284fe41cab1cccfe6c189f33f0d2118b9f6714f4ac74ab8e1e29015984" exitCode=0 Jan 21 11:02:09 crc kubenswrapper[4684]: I0121 11:02:09.470884 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kj4sx" event={"ID":"eeaf5d2e-696e-4405-a9b9-29fcb9353086","Type":"ContainerDied","Data":"f6d89e284fe41cab1cccfe6c189f33f0d2118b9f6714f4ac74ab8e1e29015984"} Jan 21 11:02:11 crc kubenswrapper[4684]: I0121 11:02:11.486910 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kj4sx" event={"ID":"eeaf5d2e-696e-4405-a9b9-29fcb9353086","Type":"ContainerStarted","Data":"becc90e89386d39d6d69c1e678935cfbe8b040c086b5f5e709cbde0f5be904c9"} Jan 21 11:02:11 crc kubenswrapper[4684]: I0121 11:02:11.512853 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kj4sx" podStartSLOduration=3.366952359 podStartE2EDuration="6.512829179s" podCreationTimestamp="2026-01-21 11:02:05 +0000 UTC" 
firstStartedPulling="2026-01-21 11:02:07.455931099 +0000 UTC m=+3365.214014076" lastFinishedPulling="2026-01-21 11:02:10.601807939 +0000 UTC m=+3368.359890896" observedRunningTime="2026-01-21 11:02:11.50552009 +0000 UTC m=+3369.263603057" watchObservedRunningTime="2026-01-21 11:02:11.512829179 +0000 UTC m=+3369.270912146" Jan 21 11:02:16 crc kubenswrapper[4684]: I0121 11:02:16.197807 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kj4sx" Jan 21 11:02:16 crc kubenswrapper[4684]: I0121 11:02:16.198226 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kj4sx" Jan 21 11:02:16 crc kubenswrapper[4684]: I0121 11:02:16.239947 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kj4sx" Jan 21 11:02:16 crc kubenswrapper[4684]: I0121 11:02:16.572707 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kj4sx" Jan 21 11:02:16 crc kubenswrapper[4684]: I0121 11:02:16.630039 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kj4sx"] Jan 21 11:02:18 crc kubenswrapper[4684]: I0121 11:02:18.535122 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kj4sx" podUID="eeaf5d2e-696e-4405-a9b9-29fcb9353086" containerName="registry-server" containerID="cri-o://becc90e89386d39d6d69c1e678935cfbe8b040c086b5f5e709cbde0f5be904c9" gracePeriod=2 Jan 21 11:02:19 crc kubenswrapper[4684]: I0121 11:02:19.494014 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kj4sx" Jan 21 11:02:19 crc kubenswrapper[4684]: I0121 11:02:19.544957 4684 generic.go:334] "Generic (PLEG): container finished" podID="eeaf5d2e-696e-4405-a9b9-29fcb9353086" containerID="becc90e89386d39d6d69c1e678935cfbe8b040c086b5f5e709cbde0f5be904c9" exitCode=0 Jan 21 11:02:19 crc kubenswrapper[4684]: I0121 11:02:19.545001 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kj4sx" event={"ID":"eeaf5d2e-696e-4405-a9b9-29fcb9353086","Type":"ContainerDied","Data":"becc90e89386d39d6d69c1e678935cfbe8b040c086b5f5e709cbde0f5be904c9"} Jan 21 11:02:19 crc kubenswrapper[4684]: I0121 11:02:19.545034 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kj4sx" event={"ID":"eeaf5d2e-696e-4405-a9b9-29fcb9353086","Type":"ContainerDied","Data":"453ece6270111fdbb86de2102a9842abfb7dd9301515a32d69b3811338d1e1b2"} Jan 21 11:02:19 crc kubenswrapper[4684]: I0121 11:02:19.545034 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kj4sx" Jan 21 11:02:19 crc kubenswrapper[4684]: I0121 11:02:19.545053 4684 scope.go:117] "RemoveContainer" containerID="becc90e89386d39d6d69c1e678935cfbe8b040c086b5f5e709cbde0f5be904c9" Jan 21 11:02:19 crc kubenswrapper[4684]: I0121 11:02:19.562513 4684 scope.go:117] "RemoveContainer" containerID="f6d89e284fe41cab1cccfe6c189f33f0d2118b9f6714f4ac74ab8e1e29015984" Jan 21 11:02:19 crc kubenswrapper[4684]: I0121 11:02:19.563961 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z45x8\" (UniqueName: \"kubernetes.io/projected/eeaf5d2e-696e-4405-a9b9-29fcb9353086-kube-api-access-z45x8\") pod \"eeaf5d2e-696e-4405-a9b9-29fcb9353086\" (UID: \"eeaf5d2e-696e-4405-a9b9-29fcb9353086\") " Jan 21 11:02:19 crc kubenswrapper[4684]: I0121 11:02:19.564117 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeaf5d2e-696e-4405-a9b9-29fcb9353086-catalog-content\") pod \"eeaf5d2e-696e-4405-a9b9-29fcb9353086\" (UID: \"eeaf5d2e-696e-4405-a9b9-29fcb9353086\") " Jan 21 11:02:19 crc kubenswrapper[4684]: I0121 11:02:19.564212 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeaf5d2e-696e-4405-a9b9-29fcb9353086-utilities\") pod \"eeaf5d2e-696e-4405-a9b9-29fcb9353086\" (UID: \"eeaf5d2e-696e-4405-a9b9-29fcb9353086\") " Jan 21 11:02:19 crc kubenswrapper[4684]: I0121 11:02:19.565117 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eeaf5d2e-696e-4405-a9b9-29fcb9353086-utilities" (OuterVolumeSpecName: "utilities") pod "eeaf5d2e-696e-4405-a9b9-29fcb9353086" (UID: "eeaf5d2e-696e-4405-a9b9-29fcb9353086"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:02:19 crc kubenswrapper[4684]: I0121 11:02:19.571703 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeaf5d2e-696e-4405-a9b9-29fcb9353086-kube-api-access-z45x8" (OuterVolumeSpecName: "kube-api-access-z45x8") pod "eeaf5d2e-696e-4405-a9b9-29fcb9353086" (UID: "eeaf5d2e-696e-4405-a9b9-29fcb9353086"). InnerVolumeSpecName "kube-api-access-z45x8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:02:19 crc kubenswrapper[4684]: I0121 11:02:19.578918 4684 scope.go:117] "RemoveContainer" containerID="ce000a262f0a4c46a1c588364628aab42810427933d5b579df2b240b7a6ade4e" Jan 21 11:02:19 crc kubenswrapper[4684]: I0121 11:02:19.631379 4684 scope.go:117] "RemoveContainer" containerID="becc90e89386d39d6d69c1e678935cfbe8b040c086b5f5e709cbde0f5be904c9" Jan 21 11:02:19 crc kubenswrapper[4684]: E0121 11:02:19.631914 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"becc90e89386d39d6d69c1e678935cfbe8b040c086b5f5e709cbde0f5be904c9\": container with ID starting with becc90e89386d39d6d69c1e678935cfbe8b040c086b5f5e709cbde0f5be904c9 not found: ID does not exist" containerID="becc90e89386d39d6d69c1e678935cfbe8b040c086b5f5e709cbde0f5be904c9" Jan 21 11:02:19 crc kubenswrapper[4684]: I0121 11:02:19.631970 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"becc90e89386d39d6d69c1e678935cfbe8b040c086b5f5e709cbde0f5be904c9"} err="failed to get container status \"becc90e89386d39d6d69c1e678935cfbe8b040c086b5f5e709cbde0f5be904c9\": rpc error: code = NotFound desc = could not find container \"becc90e89386d39d6d69c1e678935cfbe8b040c086b5f5e709cbde0f5be904c9\": container with ID starting with becc90e89386d39d6d69c1e678935cfbe8b040c086b5f5e709cbde0f5be904c9 not found: ID does not exist" Jan 21 11:02:19 crc kubenswrapper[4684]: I0121 11:02:19.631999 4684 scope.go:117] "RemoveContainer" containerID="f6d89e284fe41cab1cccfe6c189f33f0d2118b9f6714f4ac74ab8e1e29015984" Jan 21 11:02:19 crc kubenswrapper[4684]: E0121 11:02:19.632419 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6d89e284fe41cab1cccfe6c189f33f0d2118b9f6714f4ac74ab8e1e29015984\": container with ID starting with f6d89e284fe41cab1cccfe6c189f33f0d2118b9f6714f4ac74ab8e1e29015984 not found: ID does not exist" containerID="f6d89e284fe41cab1cccfe6c189f33f0d2118b9f6714f4ac74ab8e1e29015984" Jan 21 11:02:19 crc kubenswrapper[4684]: I0121 11:02:19.632493 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6d89e284fe41cab1cccfe6c189f33f0d2118b9f6714f4ac74ab8e1e29015984"} err="failed to get container status \"f6d89e284fe41cab1cccfe6c189f33f0d2118b9f6714f4ac74ab8e1e29015984\": rpc error: code = NotFound desc = could not find container \"f6d89e284fe41cab1cccfe6c189f33f0d2118b9f6714f4ac74ab8e1e29015984\": container with ID starting with f6d89e284fe41cab1cccfe6c189f33f0d2118b9f6714f4ac74ab8e1e29015984 not found: ID does not exist" Jan 21 11:02:19 crc kubenswrapper[4684]: I0121 11:02:19.632528 4684 scope.go:117] "RemoveContainer" containerID="ce000a262f0a4c46a1c588364628aab42810427933d5b579df2b240b7a6ade4e" Jan 21 11:02:19 crc kubenswrapper[4684]: E0121 11:02:19.633378 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce000a262f0a4c46a1c588364628aab42810427933d5b579df2b240b7a6ade4e\": container with ID starting with ce000a262f0a4c46a1c588364628aab42810427933d5b579df2b240b7a6ade4e not found: ID does not exist" containerID="ce000a262f0a4c46a1c588364628aab42810427933d5b579df2b240b7a6ade4e" Jan 21 11:02:19 crc kubenswrapper[4684]: I0121 11:02:19.633426 4684 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ce000a262f0a4c46a1c588364628aab42810427933d5b579df2b240b7a6ade4e"} err="failed to get container status \"ce000a262f0a4c46a1c588364628aab42810427933d5b579df2b240b7a6ade4e\": rpc error: code = NotFound desc = could not find container \"ce000a262f0a4c46a1c588364628aab42810427933d5b579df2b240b7a6ade4e\": container with ID starting with ce000a262f0a4c46a1c588364628aab42810427933d5b579df2b240b7a6ade4e not found: ID does not exist" Jan 21 11:02:19 crc kubenswrapper[4684]: I0121 11:02:19.666166 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeaf5d2e-696e-4405-a9b9-29fcb9353086-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:19 crc kubenswrapper[4684]: I0121 11:02:19.666202 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z45x8\" (UniqueName: \"kubernetes.io/projected/eeaf5d2e-696e-4405-a9b9-29fcb9353086-kube-api-access-z45x8\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:19 crc kubenswrapper[4684]: I0121 11:02:19.694754 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eeaf5d2e-696e-4405-a9b9-29fcb9353086-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eeaf5d2e-696e-4405-a9b9-29fcb9353086" (UID: "eeaf5d2e-696e-4405-a9b9-29fcb9353086"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:02:19 crc kubenswrapper[4684]: I0121 11:02:19.767275 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeaf5d2e-696e-4405-a9b9-29fcb9353086-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:19 crc kubenswrapper[4684]: I0121 11:02:19.878820 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kj4sx"] Jan 21 11:02:19 crc kubenswrapper[4684]: I0121 11:02:19.886717 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kj4sx"] Jan 21 11:02:20 crc kubenswrapper[4684]: I0121 11:02:20.525265 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeaf5d2e-696e-4405-a9b9-29fcb9353086" path="/var/lib/kubelet/pods/eeaf5d2e-696e-4405-a9b9-29fcb9353086/volumes" Jan 21 11:02:37 crc kubenswrapper[4684]: I0121 11:02:37.302811 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:02:37 crc kubenswrapper[4684]: I0121 11:02:37.303566 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:03:07 crc kubenswrapper[4684]: I0121 11:03:07.302880 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:03:07 crc kubenswrapper[4684]: I0121 11:03:07.303614 4684 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:03:07 crc kubenswrapper[4684]: I0121 11:03:07.303664 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" Jan 21 11:03:07 crc kubenswrapper[4684]: I0121 11:03:07.304291 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c0b4410078f4455b4ccb8e466819a7d72673aed7cd694f5cc9853c86483d835e"} pod="openshift-machine-config-operator/machine-config-daemon-sff6s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:03:07 crc kubenswrapper[4684]: I0121 11:03:07.304344 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" containerID="cri-o://c0b4410078f4455b4ccb8e466819a7d72673aed7cd694f5cc9853c86483d835e" gracePeriod=600 Jan 21 11:03:07 crc kubenswrapper[4684]: I0121 11:03:07.888598 4684 generic.go:334] "Generic (PLEG): container finished" podID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerID="c0b4410078f4455b4ccb8e466819a7d72673aed7cd694f5cc9853c86483d835e" exitCode=0 Jan 21 11:03:07 crc kubenswrapper[4684]: I0121 11:03:07.888690 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" event={"ID":"55d2c484-cf10-46b5-913f-2a033a2ff5c1","Type":"ContainerDied","Data":"c0b4410078f4455b4ccb8e466819a7d72673aed7cd694f5cc9853c86483d835e"} Jan 21 11:03:07 crc kubenswrapper[4684]: I0121 11:03:07.888982 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" event={"ID":"55d2c484-cf10-46b5-913f-2a033a2ff5c1","Type":"ContainerStarted","Data":"826b0cf2d22ab202cb249edd094e0482f445efa82dc0dc6facfdb00f9fc9e729"} Jan 21 11:03:07 crc kubenswrapper[4684]: I0121 11:03:07.889006 4684 scope.go:117] "RemoveContainer" containerID="e671420303f7cbbe060431f54d1fda2dffb6433ae1876c4b45ffe1094abdc778" Jan 21 11:04:17 crc kubenswrapper[4684]: I0121 11:04:17.816794 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-lbbss"] Jan 21 11:04:17 crc kubenswrapper[4684]: E0121 11:04:17.818463 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeaf5d2e-696e-4405-a9b9-29fcb9353086" containerName="extract-utilities" Jan 21 11:04:17 crc kubenswrapper[4684]: I0121 11:04:17.818488 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeaf5d2e-696e-4405-a9b9-29fcb9353086" containerName="extract-utilities" Jan 21 11:04:17 crc kubenswrapper[4684]: E0121 11:04:17.818569 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeaf5d2e-696e-4405-a9b9-29fcb9353086" containerName="extract-content" Jan 21 11:04:17 crc kubenswrapper[4684]: I0121 11:04:17.818581 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeaf5d2e-696e-4405-a9b9-29fcb9353086" containerName="extract-content" Jan 21 11:04:17 crc kubenswrapper[4684]: E0121 11:04:17.818607 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeaf5d2e-696e-4405-a9b9-29fcb9353086" 
containerName="registry-server" Jan 21 11:04:17 crc kubenswrapper[4684]: I0121 11:04:17.818617 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeaf5d2e-696e-4405-a9b9-29fcb9353086" containerName="registry-server" Jan 21 11:04:17 crc kubenswrapper[4684]: I0121 11:04:17.818830 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeaf5d2e-696e-4405-a9b9-29fcb9353086" containerName="registry-server" Jan 21 11:04:17 crc kubenswrapper[4684]: I0121 11:04:17.819713 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-lbbss" Jan 21 11:04:17 crc kubenswrapper[4684]: I0121 11:04:17.830465 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-lbbss"] Jan 21 11:04:17 crc kubenswrapper[4684]: I0121 11:04:17.900906 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clscb\" (UniqueName: \"kubernetes.io/projected/a4328488-736f-4b76-8f14-ac3b79b169c8-kube-api-access-clscb\") pod \"infrawatch-operators-lbbss\" (UID: \"a4328488-736f-4b76-8f14-ac3b79b169c8\") " pod="service-telemetry/infrawatch-operators-lbbss" Jan 21 11:04:18 crc kubenswrapper[4684]: I0121 11:04:18.002429 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clscb\" (UniqueName: \"kubernetes.io/projected/a4328488-736f-4b76-8f14-ac3b79b169c8-kube-api-access-clscb\") pod \"infrawatch-operators-lbbss\" (UID: \"a4328488-736f-4b76-8f14-ac3b79b169c8\") " pod="service-telemetry/infrawatch-operators-lbbss" Jan 21 11:04:18 crc kubenswrapper[4684]: I0121 11:04:18.022833 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clscb\" (UniqueName: \"kubernetes.io/projected/a4328488-736f-4b76-8f14-ac3b79b169c8-kube-api-access-clscb\") pod \"infrawatch-operators-lbbss\" (UID: \"a4328488-736f-4b76-8f14-ac3b79b169c8\") " pod="service-telemetry/infrawatch-operators-lbbss" Jan 21 11:04:18 crc kubenswrapper[4684]: I0121 11:04:18.137326 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-lbbss" Jan 21 11:04:18 crc kubenswrapper[4684]: I0121 11:04:18.360466 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-lbbss"] Jan 21 11:04:18 crc kubenswrapper[4684]: I0121 11:04:18.381768 4684 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 11:04:18 crc kubenswrapper[4684]: I0121 11:04:18.396991 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-lbbss" event={"ID":"a4328488-736f-4b76-8f14-ac3b79b169c8","Type":"ContainerStarted","Data":"e977904046ea6686edbf1a2b66dfae8fe6e15c42b1d109c0098482869cc04d62"} Jan 21 11:04:20 crc kubenswrapper[4684]: I0121 11:04:20.417268 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-lbbss" event={"ID":"a4328488-736f-4b76-8f14-ac3b79b169c8","Type":"ContainerStarted","Data":"8962be2872efb92d9b44794f08f43adb8aef4f05c9c5de317ee35fecf7277f52"} Jan 21 11:04:20 crc kubenswrapper[4684]: I0121 11:04:20.447863 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-lbbss" podStartSLOduration=2.701453191 podStartE2EDuration="3.447826958s" podCreationTimestamp="2026-01-21 11:04:17 +0000 UTC" firstStartedPulling="2026-01-21 11:04:18.381501424 +0000 UTC m=+3496.139584381" lastFinishedPulling="2026-01-21 11:04:19.127875171 +0000 UTC m=+3496.885958148" observedRunningTime="2026-01-21 11:04:20.436073662 +0000 UTC m=+3498.194156629" watchObservedRunningTime="2026-01-21 11:04:20.447826958 +0000 UTC m=+3498.205909925" Jan 21 11:04:28 crc kubenswrapper[4684]: I0121 11:04:28.138040 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-lbbss" Jan 21 11:04:28 crc kubenswrapper[4684]: I0121 11:04:28.138662 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-lbbss" Jan 21 11:04:28 crc kubenswrapper[4684]: I0121 11:04:28.166281 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-lbbss" Jan 21 11:04:28 crc kubenswrapper[4684]: I0121 11:04:28.495208 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-lbbss" Jan 21 11:04:28 crc kubenswrapper[4684]: I0121 11:04:28.788187 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gl4xp"] Jan 21 11:04:28 crc kubenswrapper[4684]: I0121 11:04:28.790984 4684 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gl4xp" Jan 21 11:04:28 crc kubenswrapper[4684]: I0121 11:04:28.807018 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gl4xp"] Jan 21 11:04:28 crc kubenswrapper[4684]: I0121 11:04:28.861125 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a867587e-be91-4553-99f2-3651a65f907b-catalog-content\") pod \"community-operators-gl4xp\" (UID: \"a867587e-be91-4553-99f2-3651a65f907b\") " pod="openshift-marketplace/community-operators-gl4xp" Jan 21 11:04:28 crc kubenswrapper[4684]: I0121 11:04:28.861229 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a867587e-be91-4553-99f2-3651a65f907b-utilities\") pod \"community-operators-gl4xp\" (UID: \"a867587e-be91-4553-99f2-3651a65f907b\") " pod="openshift-marketplace/community-operators-gl4xp" Jan 21 11:04:28 crc kubenswrapper[4684]: I0121 11:04:28.861282 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km752\" (UniqueName: \"kubernetes.io/projected/a867587e-be91-4553-99f2-3651a65f907b-kube-api-access-km752\") pod \"community-operators-gl4xp\" (UID: \"a867587e-be91-4553-99f2-3651a65f907b\") " pod="openshift-marketplace/community-operators-gl4xp" Jan 21 11:04:28 crc kubenswrapper[4684]: I0121 11:04:28.962354 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a867587e-be91-4553-99f2-3651a65f907b-utilities\") pod \"community-operators-gl4xp\" (UID: \"a867587e-be91-4553-99f2-3651a65f907b\") " pod="openshift-marketplace/community-operators-gl4xp" Jan 21 11:04:28 crc kubenswrapper[4684]: I0121 11:04:28.963119 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km752\" (UniqueName: \"kubernetes.io/projected/a867587e-be91-4553-99f2-3651a65f907b-kube-api-access-km752\") pod \"community-operators-gl4xp\" (UID: \"a867587e-be91-4553-99f2-3651a65f907b\") " pod="openshift-marketplace/community-operators-gl4xp" Jan 21 11:04:28 crc kubenswrapper[4684]: I0121 11:04:28.963642 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a867587e-be91-4553-99f2-3651a65f907b-catalog-content\") pod \"community-operators-gl4xp\" (UID: \"a867587e-be91-4553-99f2-3651a65f907b\") " pod="openshift-marketplace/community-operators-gl4xp" Jan 21 11:04:28 crc kubenswrapper[4684]: I0121 11:04:28.963068 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a867587e-be91-4553-99f2-3651a65f907b-utilities\") pod \"community-operators-gl4xp\" (UID: \"a867587e-be91-4553-99f2-3651a65f907b\") " pod="openshift-marketplace/community-operators-gl4xp" Jan 21 11:04:28 crc kubenswrapper[4684]: I0121 11:04:28.963972 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a867587e-be91-4553-99f2-3651a65f907b-catalog-content\") pod \"community-operators-gl4xp\" (UID: \"a867587e-be91-4553-99f2-3651a65f907b\") " pod="openshift-marketplace/community-operators-gl4xp" Jan 21 11:04:28 crc kubenswrapper[4684]: I0121 11:04:28.982854 4684 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-km752\" (UniqueName: \"kubernetes.io/projected/a867587e-be91-4553-99f2-3651a65f907b-kube-api-access-km752\") pod \"community-operators-gl4xp\" (UID: \"a867587e-be91-4553-99f2-3651a65f907b\") " pod="openshift-marketplace/community-operators-gl4xp" Jan 21 11:04:29 crc kubenswrapper[4684]: I0121 11:04:29.110784 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gl4xp" Jan 21 11:04:29 crc kubenswrapper[4684]: I0121 11:04:29.368125 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gl4xp"] Jan 21 11:04:29 crc kubenswrapper[4684]: I0121 11:04:29.478705 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gl4xp" event={"ID":"a867587e-be91-4553-99f2-3651a65f907b","Type":"ContainerStarted","Data":"a45d1f2b259b0d9733f6c227f557140f7ac77b2e54c9c09b845012411241df90"} Jan 21 11:04:30 crc kubenswrapper[4684]: I0121 11:04:30.379049 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-lbbss"] Jan 21 11:04:30 crc kubenswrapper[4684]: I0121 11:04:30.485530 4684 generic.go:334] "Generic (PLEG): container finished" podID="a867587e-be91-4553-99f2-3651a65f907b" containerID="832267ed222ab275ab31a2927d03d0219de5f69990503c055bdec5afcfb7fd36" exitCode=0 Jan 21 11:04:30 crc kubenswrapper[4684]: I0121 11:04:30.485735 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-lbbss" podUID="a4328488-736f-4b76-8f14-ac3b79b169c8" containerName="registry-server" containerID="cri-o://8962be2872efb92d9b44794f08f43adb8aef4f05c9c5de317ee35fecf7277f52" gracePeriod=2 Jan 21 11:04:30 crc kubenswrapper[4684]: I0121 11:04:30.485666 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gl4xp" event={"ID":"a867587e-be91-4553-99f2-3651a65f907b","Type":"ContainerDied","Data":"832267ed222ab275ab31a2927d03d0219de5f69990503c055bdec5afcfb7fd36"} Jan 21 11:04:31 crc kubenswrapper[4684]: I0121 11:04:31.493684 4684 generic.go:334] "Generic (PLEG): container finished" podID="a4328488-736f-4b76-8f14-ac3b79b169c8" containerID="8962be2872efb92d9b44794f08f43adb8aef4f05c9c5de317ee35fecf7277f52" exitCode=0 Jan 21 11:04:31 crc kubenswrapper[4684]: I0121 11:04:31.493752 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-lbbss" event={"ID":"a4328488-736f-4b76-8f14-ac3b79b169c8","Type":"ContainerDied","Data":"8962be2872efb92d9b44794f08f43adb8aef4f05c9c5de317ee35fecf7277f52"} Jan 21 11:04:32 crc kubenswrapper[4684]: I0121 11:04:32.165380 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-lbbss" Jan 21 11:04:32 crc kubenswrapper[4684]: I0121 11:04:32.214085 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clscb\" (UniqueName: \"kubernetes.io/projected/a4328488-736f-4b76-8f14-ac3b79b169c8-kube-api-access-clscb\") pod \"a4328488-736f-4b76-8f14-ac3b79b169c8\" (UID: \"a4328488-736f-4b76-8f14-ac3b79b169c8\") " Jan 21 11:04:32 crc kubenswrapper[4684]: I0121 11:04:32.228999 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4328488-736f-4b76-8f14-ac3b79b169c8-kube-api-access-clscb" (OuterVolumeSpecName: "kube-api-access-clscb") pod "a4328488-736f-4b76-8f14-ac3b79b169c8" (UID: "a4328488-736f-4b76-8f14-ac3b79b169c8"). InnerVolumeSpecName "kube-api-access-clscb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:04:32 crc kubenswrapper[4684]: I0121 11:04:32.315485 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clscb\" (UniqueName: \"kubernetes.io/projected/a4328488-736f-4b76-8f14-ac3b79b169c8-kube-api-access-clscb\") on node \"crc\" DevicePath \"\"" Jan 21 11:04:32 crc kubenswrapper[4684]: I0121 11:04:32.503845 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-lbbss" event={"ID":"a4328488-736f-4b76-8f14-ac3b79b169c8","Type":"ContainerDied","Data":"e977904046ea6686edbf1a2b66dfae8fe6e15c42b1d109c0098482869cc04d62"} Jan 21 11:04:32 crc kubenswrapper[4684]: I0121 11:04:32.503897 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-lbbss" Jan 21 11:04:32 crc kubenswrapper[4684]: I0121 11:04:32.503939 4684 scope.go:117] "RemoveContainer" containerID="8962be2872efb92d9b44794f08f43adb8aef4f05c9c5de317ee35fecf7277f52" Jan 21 11:04:32 crc kubenswrapper[4684]: I0121 11:04:32.554749 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-lbbss"] Jan 21 11:04:32 crc kubenswrapper[4684]: I0121 11:04:32.559391 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-lbbss"] Jan 21 11:04:33 crc kubenswrapper[4684]: I0121 11:04:33.513979 4684 generic.go:334] "Generic (PLEG): container finished" podID="a867587e-be91-4553-99f2-3651a65f907b" containerID="93f9b0e4002b9e6ecdaf5a87812c3748f53ac6be796c3c2896b76978c0008acf" exitCode=0 Jan 21 11:04:33 crc kubenswrapper[4684]: I0121 11:04:33.514201 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gl4xp" event={"ID":"a867587e-be91-4553-99f2-3651a65f907b","Type":"ContainerDied","Data":"93f9b0e4002b9e6ecdaf5a87812c3748f53ac6be796c3c2896b76978c0008acf"} Jan 21 11:04:34 crc kubenswrapper[4684]: I0121 11:04:34.545066 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4328488-736f-4b76-8f14-ac3b79b169c8" path="/var/lib/kubelet/pods/a4328488-736f-4b76-8f14-ac3b79b169c8/volumes" Jan 21 11:04:35 crc kubenswrapper[4684]: I0121 11:04:35.547209 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gl4xp" event={"ID":"a867587e-be91-4553-99f2-3651a65f907b","Type":"ContainerStarted","Data":"e6dcc625f10aa141538f7598e1492cce1bac0e668f2ff6c3f7db333999d53aa7"} Jan 21 11:04:35 crc kubenswrapper[4684]: I0121 11:04:35.562517 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-gl4xp" podStartSLOduration=3.314748719 podStartE2EDuration="7.562500778s" podCreationTimestamp="2026-01-21 11:04:28 +0000 UTC" firstStartedPulling="2026-01-21 11:04:30.486981351 +0000 UTC m=+3508.245064318" lastFinishedPulling="2026-01-21 11:04:34.73473342 +0000 UTC m=+3512.492816377" observedRunningTime="2026-01-21 11:04:35.561683691 +0000 UTC m=+3513.319766658" watchObservedRunningTime="2026-01-21 11:04:35.562500778 +0000 UTC m=+3513.320583745" Jan 21 11:04:39 crc kubenswrapper[4684]: I0121 11:04:39.111805 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gl4xp" Jan 21 11:04:39 crc kubenswrapper[4684]: I0121 11:04:39.112202 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gl4xp" Jan 21 11:04:39 crc kubenswrapper[4684]: I0121 11:04:39.152535 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gl4xp" Jan 21 11:04:39 crc kubenswrapper[4684]: I0121 11:04:39.627780 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gl4xp" Jan 21 11:04:39 crc kubenswrapper[4684]: I0121 11:04:39.983878 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gl4xp"] Jan 21 11:04:41 crc kubenswrapper[4684]: I0121 11:04:41.588573 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gl4xp" podUID="a867587e-be91-4553-99f2-3651a65f907b" containerName="registry-server" containerID="cri-o://e6dcc625f10aa141538f7598e1492cce1bac0e668f2ff6c3f7db333999d53aa7" gracePeriod=2 Jan 21 11:04:43 crc kubenswrapper[4684]: I0121 11:04:43.107583 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gl4xp" Jan 21 11:04:43 crc kubenswrapper[4684]: I0121 11:04:43.198318 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a867587e-be91-4553-99f2-3651a65f907b-catalog-content\") pod \"a867587e-be91-4553-99f2-3651a65f907b\" (UID: \"a867587e-be91-4553-99f2-3651a65f907b\") " Jan 21 11:04:43 crc kubenswrapper[4684]: I0121 11:04:43.198524 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km752\" (UniqueName: \"kubernetes.io/projected/a867587e-be91-4553-99f2-3651a65f907b-kube-api-access-km752\") pod \"a867587e-be91-4553-99f2-3651a65f907b\" (UID: \"a867587e-be91-4553-99f2-3651a65f907b\") " Jan 21 11:04:43 crc kubenswrapper[4684]: I0121 11:04:43.198558 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a867587e-be91-4553-99f2-3651a65f907b-utilities\") pod \"a867587e-be91-4553-99f2-3651a65f907b\" (UID: \"a867587e-be91-4553-99f2-3651a65f907b\") " Jan 21 11:04:43 crc kubenswrapper[4684]: I0121 11:04:43.199863 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a867587e-be91-4553-99f2-3651a65f907b-utilities" (OuterVolumeSpecName: "utilities") pod "a867587e-be91-4553-99f2-3651a65f907b" (UID: "a867587e-be91-4553-99f2-3651a65f907b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:04:43 crc kubenswrapper[4684]: I0121 11:04:43.207228 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a867587e-be91-4553-99f2-3651a65f907b-kube-api-access-km752" (OuterVolumeSpecName: "kube-api-access-km752") pod "a867587e-be91-4553-99f2-3651a65f907b" (UID: "a867587e-be91-4553-99f2-3651a65f907b"). InnerVolumeSpecName "kube-api-access-km752". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:04:43 crc kubenswrapper[4684]: I0121 11:04:43.257829 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a867587e-be91-4553-99f2-3651a65f907b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a867587e-be91-4553-99f2-3651a65f907b" (UID: "a867587e-be91-4553-99f2-3651a65f907b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:04:43 crc kubenswrapper[4684]: I0121 11:04:43.300172 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-km752\" (UniqueName: \"kubernetes.io/projected/a867587e-be91-4553-99f2-3651a65f907b-kube-api-access-km752\") on node \"crc\" DevicePath \"\"" Jan 21 11:04:43 crc kubenswrapper[4684]: I0121 11:04:43.300204 4684 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a867587e-be91-4553-99f2-3651a65f907b-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:04:43 crc kubenswrapper[4684]: I0121 11:04:43.300213 4684 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a867587e-be91-4553-99f2-3651a65f907b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:04:43 crc kubenswrapper[4684]: I0121 11:04:43.605047 4684 generic.go:334] "Generic (PLEG): container finished" podID="a867587e-be91-4553-99f2-3651a65f907b" containerID="e6dcc625f10aa141538f7598e1492cce1bac0e668f2ff6c3f7db333999d53aa7" exitCode=0 Jan 21 11:04:43 crc kubenswrapper[4684]: I0121 11:04:43.605090 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gl4xp" event={"ID":"a867587e-be91-4553-99f2-3651a65f907b","Type":"ContainerDied","Data":"e6dcc625f10aa141538f7598e1492cce1bac0e668f2ff6c3f7db333999d53aa7"} Jan 21 11:04:43 crc kubenswrapper[4684]: I0121 11:04:43.605117 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gl4xp" event={"ID":"a867587e-be91-4553-99f2-3651a65f907b","Type":"ContainerDied","Data":"a45d1f2b259b0d9733f6c227f557140f7ac77b2e54c9c09b845012411241df90"} Jan 21 11:04:43 crc kubenswrapper[4684]: I0121 11:04:43.605134 4684 scope.go:117] "RemoveContainer" containerID="e6dcc625f10aa141538f7598e1492cce1bac0e668f2ff6c3f7db333999d53aa7" Jan 21 11:04:43 crc kubenswrapper[4684]: I0121 11:04:43.605230 4684 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gl4xp" Jan 21 11:04:43 crc kubenswrapper[4684]: I0121 11:04:43.634753 4684 scope.go:117] "RemoveContainer" containerID="93f9b0e4002b9e6ecdaf5a87812c3748f53ac6be796c3c2896b76978c0008acf" Jan 21 11:04:43 crc kubenswrapper[4684]: I0121 11:04:43.652206 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gl4xp"] Jan 21 11:04:43 crc kubenswrapper[4684]: I0121 11:04:43.656177 4684 scope.go:117] "RemoveContainer" containerID="832267ed222ab275ab31a2927d03d0219de5f69990503c055bdec5afcfb7fd36" Jan 21 11:04:43 crc kubenswrapper[4684]: I0121 11:04:43.657288 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gl4xp"] Jan 21 11:04:43 crc kubenswrapper[4684]: I0121 11:04:43.677815 4684 scope.go:117] "RemoveContainer" containerID="e6dcc625f10aa141538f7598e1492cce1bac0e668f2ff6c3f7db333999d53aa7" Jan 21 11:04:43 crc kubenswrapper[4684]: E0121 11:04:43.678569 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6dcc625f10aa141538f7598e1492cce1bac0e668f2ff6c3f7db333999d53aa7\": container with ID starting with e6dcc625f10aa141538f7598e1492cce1bac0e668f2ff6c3f7db333999d53aa7 not found: ID does not exist" containerID="e6dcc625f10aa141538f7598e1492cce1bac0e668f2ff6c3f7db333999d53aa7" Jan 21 11:04:43 crc kubenswrapper[4684]: I0121 11:04:43.678608 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6dcc625f10aa141538f7598e1492cce1bac0e668f2ff6c3f7db333999d53aa7"} err="failed to get container status \"e6dcc625f10aa141538f7598e1492cce1bac0e668f2ff6c3f7db333999d53aa7\": rpc error: code = NotFound desc = could not find container \"e6dcc625f10aa141538f7598e1492cce1bac0e668f2ff6c3f7db333999d53aa7\": container with ID starting with e6dcc625f10aa141538f7598e1492cce1bac0e668f2ff6c3f7db333999d53aa7 not found: ID does not exist" Jan 21 11:04:43 crc kubenswrapper[4684]: I0121 11:04:43.678632 4684 scope.go:117] "RemoveContainer" containerID="93f9b0e4002b9e6ecdaf5a87812c3748f53ac6be796c3c2896b76978c0008acf" Jan 21 11:04:43 crc kubenswrapper[4684]: E0121 11:04:43.678895 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93f9b0e4002b9e6ecdaf5a87812c3748f53ac6be796c3c2896b76978c0008acf\": container with ID starting with 93f9b0e4002b9e6ecdaf5a87812c3748f53ac6be796c3c2896b76978c0008acf not found: ID does not exist" containerID="93f9b0e4002b9e6ecdaf5a87812c3748f53ac6be796c3c2896b76978c0008acf" Jan 21 11:04:43 crc kubenswrapper[4684]: I0121 11:04:43.678920 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93f9b0e4002b9e6ecdaf5a87812c3748f53ac6be796c3c2896b76978c0008acf"} err="failed to get container status \"93f9b0e4002b9e6ecdaf5a87812c3748f53ac6be796c3c2896b76978c0008acf\": rpc error: code = NotFound desc = could not find container \"93f9b0e4002b9e6ecdaf5a87812c3748f53ac6be796c3c2896b76978c0008acf\": container with ID starting with 93f9b0e4002b9e6ecdaf5a87812c3748f53ac6be796c3c2896b76978c0008acf not found: ID does not exist" Jan 21 11:04:43 crc kubenswrapper[4684]: I0121 11:04:43.678936 4684 scope.go:117] "RemoveContainer" containerID="832267ed222ab275ab31a2927d03d0219de5f69990503c055bdec5afcfb7fd36" Jan 21 11:04:43 crc kubenswrapper[4684]: E0121 11:04:43.679163 4684 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"832267ed222ab275ab31a2927d03d0219de5f69990503c055bdec5afcfb7fd36\": container with ID starting with 832267ed222ab275ab31a2927d03d0219de5f69990503c055bdec5afcfb7fd36 not found: ID does not exist" containerID="832267ed222ab275ab31a2927d03d0219de5f69990503c055bdec5afcfb7fd36" Jan 21 11:04:43 crc kubenswrapper[4684]: I0121 11:04:43.679189 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"832267ed222ab275ab31a2927d03d0219de5f69990503c055bdec5afcfb7fd36"} err="failed to get container status \"832267ed222ab275ab31a2927d03d0219de5f69990503c055bdec5afcfb7fd36\": rpc error: code = NotFound desc = could not find container \"832267ed222ab275ab31a2927d03d0219de5f69990503c055bdec5afcfb7fd36\": container with ID starting with 832267ed222ab275ab31a2927d03d0219de5f69990503c055bdec5afcfb7fd36 not found: ID does not exist" Jan 21 11:04:44 crc kubenswrapper[4684]: I0121 11:04:44.530227 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a867587e-be91-4553-99f2-3651a65f907b" path="/var/lib/kubelet/pods/a867587e-be91-4553-99f2-3651a65f907b/volumes" Jan 21 11:05:07 crc kubenswrapper[4684]: I0121 11:05:07.302162 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:05:07 crc kubenswrapper[4684]: I0121 11:05:07.302817 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:05:37 crc kubenswrapper[4684]: I0121 11:05:37.302485 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:05:37 crc kubenswrapper[4684]: I0121 11:05:37.303535 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:06:07 crc kubenswrapper[4684]: I0121 11:06:07.302316 4684 patch_prober.go:28] interesting pod/machine-config-daemon-sff6s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:06:07 crc kubenswrapper[4684]: I0121 11:06:07.302883 4684 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:06:07 crc kubenswrapper[4684]: I0121 11:06:07.302932 4684 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" Jan 21 11:06:07 crc kubenswrapper[4684]: I0121 11:06:07.303617 4684 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"826b0cf2d22ab202cb249edd094e0482f445efa82dc0dc6facfdb00f9fc9e729"} pod="openshift-machine-config-operator/machine-config-daemon-sff6s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:06:07 crc kubenswrapper[4684]: I0121 11:06:07.303669 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerName="machine-config-daemon" containerID="cri-o://826b0cf2d22ab202cb249edd094e0482f445efa82dc0dc6facfdb00f9fc9e729" gracePeriod=600 Jan 21 11:06:07 crc kubenswrapper[4684]: E0121 11:06:07.448115 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 11:06:08 crc kubenswrapper[4684]: I0121 11:06:08.221201 4684 generic.go:334] "Generic (PLEG): container finished" podID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" containerID="826b0cf2d22ab202cb249edd094e0482f445efa82dc0dc6facfdb00f9fc9e729" exitCode=0 Jan 21 11:06:08 crc kubenswrapper[4684]: I0121 11:06:08.221254 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" event={"ID":"55d2c484-cf10-46b5-913f-2a033a2ff5c1","Type":"ContainerDied","Data":"826b0cf2d22ab202cb249edd094e0482f445efa82dc0dc6facfdb00f9fc9e729"} Jan 21 11:06:08 crc kubenswrapper[4684]: I0121 11:06:08.221299 4684 scope.go:117] "RemoveContainer" containerID="c0b4410078f4455b4ccb8e466819a7d72673aed7cd694f5cc9853c86483d835e" Jan 21 11:06:08 crc kubenswrapper[4684]: I0121 11:06:08.222010 4684 scope.go:117] "RemoveContainer" containerID="826b0cf2d22ab202cb249edd094e0482f445efa82dc0dc6facfdb00f9fc9e729" Jan 21 11:06:08 crc kubenswrapper[4684]: E0121 11:06:08.222506 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 11:06:20 crc kubenswrapper[4684]: I0121 11:06:20.514430 4684 scope.go:117] "RemoveContainer" containerID="826b0cf2d22ab202cb249edd094e0482f445efa82dc0dc6facfdb00f9fc9e729" Jan 21 11:06:20 crc kubenswrapper[4684]: E0121 11:06:20.515083 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 11:06:31 crc 
kubenswrapper[4684]: I0121 11:06:31.514019 4684 scope.go:117] "RemoveContainer" containerID="826b0cf2d22ab202cb249edd094e0482f445efa82dc0dc6facfdb00f9fc9e729" Jan 21 11:06:31 crc kubenswrapper[4684]: E0121 11:06:31.514837 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 11:06:45 crc kubenswrapper[4684]: I0121 11:06:45.515355 4684 scope.go:117] "RemoveContainer" containerID="826b0cf2d22ab202cb249edd094e0482f445efa82dc0dc6facfdb00f9fc9e729" Jan 21 11:06:45 crc kubenswrapper[4684]: E0121 11:06:45.516163 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 11:06:58 crc kubenswrapper[4684]: I0121 11:06:58.515257 4684 scope.go:117] "RemoveContainer" containerID="826b0cf2d22ab202cb249edd094e0482f445efa82dc0dc6facfdb00f9fc9e729" Jan 21 11:06:58 crc kubenswrapper[4684]: E0121 11:06:58.515907 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 11:07:11 crc kubenswrapper[4684]: I0121 11:07:11.515136 4684 scope.go:117] "RemoveContainer" containerID="826b0cf2d22ab202cb249edd094e0482f445efa82dc0dc6facfdb00f9fc9e729" Jan 21 11:07:11 crc kubenswrapper[4684]: E0121 11:07:11.516027 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 11:07:26 crc kubenswrapper[4684]: I0121 11:07:26.514785 4684 scope.go:117] "RemoveContainer" containerID="826b0cf2d22ab202cb249edd094e0482f445efa82dc0dc6facfdb00f9fc9e729" Jan 21 11:07:26 crc kubenswrapper[4684]: E0121 11:07:26.516723 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 11:07:37 crc kubenswrapper[4684]: I0121 11:07:37.514958 4684 scope.go:117] "RemoveContainer" containerID="826b0cf2d22ab202cb249edd094e0482f445efa82dc0dc6facfdb00f9fc9e729" Jan 21 11:07:37 crc 
kubenswrapper[4684]: E0121 11:07:37.515852 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 11:07:49 crc kubenswrapper[4684]: I0121 11:07:49.514701 4684 scope.go:117] "RemoveContainer" containerID="826b0cf2d22ab202cb249edd094e0482f445efa82dc0dc6facfdb00f9fc9e729" Jan 21 11:07:49 crc kubenswrapper[4684]: E0121 11:07:49.515489 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 11:08:00 crc kubenswrapper[4684]: I0121 11:08:00.514897 4684 scope.go:117] "RemoveContainer" containerID="826b0cf2d22ab202cb249edd094e0482f445efa82dc0dc6facfdb00f9fc9e729" Jan 21 11:08:00 crc kubenswrapper[4684]: E0121 11:08:00.515696 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 11:08:14 crc kubenswrapper[4684]: I0121 11:08:14.514873 4684 scope.go:117] "RemoveContainer" containerID="826b0cf2d22ab202cb249edd094e0482f445efa82dc0dc6facfdb00f9fc9e729" Jan 21 11:08:14 crc kubenswrapper[4684]: E0121 11:08:14.515594 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 11:08:28 crc kubenswrapper[4684]: I0121 11:08:28.515237 4684 scope.go:117] "RemoveContainer" containerID="826b0cf2d22ab202cb249edd094e0482f445efa82dc0dc6facfdb00f9fc9e729" Jan 21 11:08:28 crc kubenswrapper[4684]: E0121 11:08:28.515969 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 11:08:39 crc kubenswrapper[4684]: I0121 11:08:39.514709 4684 scope.go:117] "RemoveContainer" containerID="826b0cf2d22ab202cb249edd094e0482f445efa82dc0dc6facfdb00f9fc9e729" Jan 21 11:08:39 crc kubenswrapper[4684]: E0121 11:08:39.515584 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 11:08:50 crc kubenswrapper[4684]: I0121 11:08:50.515454 4684 scope.go:117] "RemoveContainer" containerID="826b0cf2d22ab202cb249edd094e0482f445efa82dc0dc6facfdb00f9fc9e729" Jan 21 11:08:50 crc kubenswrapper[4684]: E0121 11:08:50.518649 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 11:09:04 crc kubenswrapper[4684]: I0121 11:09:04.514137 4684 scope.go:117] "RemoveContainer" containerID="826b0cf2d22ab202cb249edd094e0482f445efa82dc0dc6facfdb00f9fc9e729" Jan 21 11:09:04 crc kubenswrapper[4684]: E0121 11:09:04.515916 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 11:09:15 crc kubenswrapper[4684]: I0121 11:09:15.515206 4684 scope.go:117] "RemoveContainer" containerID="826b0cf2d22ab202cb249edd094e0482f445efa82dc0dc6facfdb00f9fc9e729" Jan 21 11:09:15 crc kubenswrapper[4684]: E0121 11:09:15.516581 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 11:09:29 crc kubenswrapper[4684]: I0121 11:09:29.514948 4684 scope.go:117] "RemoveContainer" containerID="826b0cf2d22ab202cb249edd094e0482f445efa82dc0dc6facfdb00f9fc9e729" Jan 21 11:09:29 crc kubenswrapper[4684]: E0121 11:09:29.515797 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 11:09:31 crc kubenswrapper[4684]: I0121 11:09:31.131735 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-tkd9g"] Jan 21 11:09:31 crc kubenswrapper[4684]: E0121 11:09:31.132071 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a867587e-be91-4553-99f2-3651a65f907b" containerName="registry-server" Jan 21 11:09:31 crc kubenswrapper[4684]: I0121 11:09:31.132087 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="a867587e-be91-4553-99f2-3651a65f907b" containerName="registry-server" Jan 21 11:09:31 
crc kubenswrapper[4684]: E0121 11:09:31.132104 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a867587e-be91-4553-99f2-3651a65f907b" containerName="extract-utilities" Jan 21 11:09:31 crc kubenswrapper[4684]: I0121 11:09:31.132112 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="a867587e-be91-4553-99f2-3651a65f907b" containerName="extract-utilities" Jan 21 11:09:31 crc kubenswrapper[4684]: E0121 11:09:31.132134 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4328488-736f-4b76-8f14-ac3b79b169c8" containerName="registry-server" Jan 21 11:09:31 crc kubenswrapper[4684]: I0121 11:09:31.132144 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4328488-736f-4b76-8f14-ac3b79b169c8" containerName="registry-server" Jan 21 11:09:31 crc kubenswrapper[4684]: E0121 11:09:31.132158 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a867587e-be91-4553-99f2-3651a65f907b" containerName="extract-content" Jan 21 11:09:31 crc kubenswrapper[4684]: I0121 11:09:31.132165 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="a867587e-be91-4553-99f2-3651a65f907b" containerName="extract-content" Jan 21 11:09:31 crc kubenswrapper[4684]: I0121 11:09:31.132318 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="a867587e-be91-4553-99f2-3651a65f907b" containerName="registry-server" Jan 21 11:09:31 crc kubenswrapper[4684]: I0121 11:09:31.132340 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4328488-736f-4b76-8f14-ac3b79b169c8" containerName="registry-server" Jan 21 11:09:31 crc kubenswrapper[4684]: I0121 11:09:31.132895 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-tkd9g" Jan 21 11:09:31 crc kubenswrapper[4684]: I0121 11:09:31.178075 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-tkd9g"] Jan 21 11:09:31 crc kubenswrapper[4684]: I0121 11:09:31.190647 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slv8l\" (UniqueName: \"kubernetes.io/projected/081393b0-241d-481b-8ecd-1cd2a6f9e804-kube-api-access-slv8l\") pod \"infrawatch-operators-tkd9g\" (UID: \"081393b0-241d-481b-8ecd-1cd2a6f9e804\") " pod="service-telemetry/infrawatch-operators-tkd9g" Jan 21 11:09:31 crc kubenswrapper[4684]: I0121 11:09:31.292127 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slv8l\" (UniqueName: \"kubernetes.io/projected/081393b0-241d-481b-8ecd-1cd2a6f9e804-kube-api-access-slv8l\") pod \"infrawatch-operators-tkd9g\" (UID: \"081393b0-241d-481b-8ecd-1cd2a6f9e804\") " pod="service-telemetry/infrawatch-operators-tkd9g" Jan 21 11:09:31 crc kubenswrapper[4684]: I0121 11:09:31.326096 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slv8l\" (UniqueName: \"kubernetes.io/projected/081393b0-241d-481b-8ecd-1cd2a6f9e804-kube-api-access-slv8l\") pod \"infrawatch-operators-tkd9g\" (UID: \"081393b0-241d-481b-8ecd-1cd2a6f9e804\") " pod="service-telemetry/infrawatch-operators-tkd9g" Jan 21 11:09:31 crc kubenswrapper[4684]: I0121 11:09:31.457937 4684 util.go:30] "No sandbox for pod can be found. 
Jan 21 11:09:31 crc kubenswrapper[4684]: I0121 11:09:31.457937 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-tkd9g"
Jan 21 11:09:31 crc kubenswrapper[4684]: I0121 11:09:31.859530 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-tkd9g"]
Jan 21 11:09:31 crc kubenswrapper[4684]: I0121 11:09:31.868390 4684 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 11:09:32 crc kubenswrapper[4684]: I0121 11:09:32.645903 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-tkd9g" event={"ID":"081393b0-241d-481b-8ecd-1cd2a6f9e804","Type":"ContainerStarted","Data":"b4e83eb9c3b6a9b257e61c8470dbebcef8371faba918a24874a0f08cb429c8c4"}
Jan 21 11:09:32 crc kubenswrapper[4684]: I0121 11:09:32.645966 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-tkd9g" event={"ID":"081393b0-241d-481b-8ecd-1cd2a6f9e804","Type":"ContainerStarted","Data":"d24ce05a70df015166ebf579342aaca219d3c05e5a50c361b8ef9a20d2b56681"}
Jan 21 11:09:32 crc kubenswrapper[4684]: I0121 11:09:32.670311 4684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-tkd9g" podStartSLOduration=1.188827462 podStartE2EDuration="1.6702838s" podCreationTimestamp="2026-01-21 11:09:31 +0000 UTC" firstStartedPulling="2026-01-21 11:09:31.868177747 +0000 UTC m=+3809.626260714" lastFinishedPulling="2026-01-21 11:09:32.349634075 +0000 UTC m=+3810.107717052" observedRunningTime="2026-01-21 11:09:32.665048187 +0000 UTC m=+3810.423131154" watchObservedRunningTime="2026-01-21 11:09:32.6702838 +0000 UTC m=+3810.428366767"
Jan 21 11:09:41 crc kubenswrapper[4684]: I0121 11:09:41.459148 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-tkd9g"
Jan 21 11:09:41 crc kubenswrapper[4684]: I0121 11:09:41.460594 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-tkd9g"
Jan 21 11:09:41 crc kubenswrapper[4684]: I0121 11:09:41.487756 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-tkd9g"
Jan 21 11:09:41 crc kubenswrapper[4684]: I0121 11:09:41.750590 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-tkd9g"
Jan 21 11:09:41 crc kubenswrapper[4684]: I0121 11:09:41.803275 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-tkd9g"]
Jan 21 11:09:43 crc kubenswrapper[4684]: I0121 11:09:43.729136 4684 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-tkd9g" podUID="081393b0-241d-481b-8ecd-1cd2a6f9e804" containerName="registry-server" containerID="cri-o://b4e83eb9c3b6a9b257e61c8470dbebcef8371faba918a24874a0f08cb429c8c4" gracePeriod=2
Jan 21 11:09:44 crc kubenswrapper[4684]: I0121 11:09:44.104970 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-tkd9g"
Jan 21 11:09:44 crc kubenswrapper[4684]: I0121 11:09:44.298650 4684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slv8l\" (UniqueName: \"kubernetes.io/projected/081393b0-241d-481b-8ecd-1cd2a6f9e804-kube-api-access-slv8l\") pod \"081393b0-241d-481b-8ecd-1cd2a6f9e804\" (UID: \"081393b0-241d-481b-8ecd-1cd2a6f9e804\") "
Jan 21 11:09:44 crc kubenswrapper[4684]: I0121 11:09:44.304218 4684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/081393b0-241d-481b-8ecd-1cd2a6f9e804-kube-api-access-slv8l" (OuterVolumeSpecName: "kube-api-access-slv8l") pod "081393b0-241d-481b-8ecd-1cd2a6f9e804" (UID: "081393b0-241d-481b-8ecd-1cd2a6f9e804"). InnerVolumeSpecName "kube-api-access-slv8l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:09:44 crc kubenswrapper[4684]: I0121 11:09:44.400385 4684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slv8l\" (UniqueName: \"kubernetes.io/projected/081393b0-241d-481b-8ecd-1cd2a6f9e804-kube-api-access-slv8l\") on node \"crc\" DevicePath \"\""
Jan 21 11:09:44 crc kubenswrapper[4684]: I0121 11:09:44.516228 4684 scope.go:117] "RemoveContainer" containerID="826b0cf2d22ab202cb249edd094e0482f445efa82dc0dc6facfdb00f9fc9e729"
Jan 21 11:09:44 crc kubenswrapper[4684]: E0121 11:09:44.516772 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1"
Jan 21 11:09:44 crc kubenswrapper[4684]: I0121 11:09:44.736935 4684 generic.go:334] "Generic (PLEG): container finished" podID="081393b0-241d-481b-8ecd-1cd2a6f9e804" containerID="b4e83eb9c3b6a9b257e61c8470dbebcef8371faba918a24874a0f08cb429c8c4" exitCode=0
Jan 21 11:09:44 crc kubenswrapper[4684]: I0121 11:09:44.736979 4684 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-tkd9g"
Jan 21 11:09:44 crc kubenswrapper[4684]: I0121 11:09:44.736989 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-tkd9g" event={"ID":"081393b0-241d-481b-8ecd-1cd2a6f9e804","Type":"ContainerDied","Data":"b4e83eb9c3b6a9b257e61c8470dbebcef8371faba918a24874a0f08cb429c8c4"}
Jan 21 11:09:44 crc kubenswrapper[4684]: I0121 11:09:44.737019 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-tkd9g" event={"ID":"081393b0-241d-481b-8ecd-1cd2a6f9e804","Type":"ContainerDied","Data":"d24ce05a70df015166ebf579342aaca219d3c05e5a50c361b8ef9a20d2b56681"}
Jan 21 11:09:44 crc kubenswrapper[4684]: I0121 11:09:44.737035 4684 scope.go:117] "RemoveContainer" containerID="b4e83eb9c3b6a9b257e61c8470dbebcef8371faba918a24874a0f08cb429c8c4"
Jan 21 11:09:44 crc kubenswrapper[4684]: I0121 11:09:44.762527 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-tkd9g"]
Jan 21 11:09:44 crc kubenswrapper[4684]: I0121 11:09:44.763018 4684 scope.go:117] "RemoveContainer" containerID="b4e83eb9c3b6a9b257e61c8470dbebcef8371faba918a24874a0f08cb429c8c4"
Jan 21 11:09:44 crc kubenswrapper[4684]: E0121 11:09:44.763491 4684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4e83eb9c3b6a9b257e61c8470dbebcef8371faba918a24874a0f08cb429c8c4\": container with ID starting with b4e83eb9c3b6a9b257e61c8470dbebcef8371faba918a24874a0f08cb429c8c4 not found: ID does not exist" containerID="b4e83eb9c3b6a9b257e61c8470dbebcef8371faba918a24874a0f08cb429c8c4"
Jan 21 11:09:44 crc kubenswrapper[4684]: I0121 11:09:44.763522 4684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4e83eb9c3b6a9b257e61c8470dbebcef8371faba918a24874a0f08cb429c8c4"} err="failed to get container status \"b4e83eb9c3b6a9b257e61c8470dbebcef8371faba918a24874a0f08cb429c8c4\": rpc error: code = NotFound desc = could not find container \"b4e83eb9c3b6a9b257e61c8470dbebcef8371faba918a24874a0f08cb429c8c4\": container with ID starting with b4e83eb9c3b6a9b257e61c8470dbebcef8371faba918a24874a0f08cb429c8c4 not found: ID does not exist"
Jan 21 11:09:44 crc kubenswrapper[4684]: I0121 11:09:44.765381 4684 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-tkd9g"]
Jan 21 11:09:46 crc kubenswrapper[4684]: I0121 11:09:46.522870 4684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="081393b0-241d-481b-8ecd-1cd2a6f9e804" path="/var/lib/kubelet/pods/081393b0-241d-481b-8ecd-1cd2a6f9e804/volumes"
Jan 21 11:09:55 crc kubenswrapper[4684]: I0121 11:09:55.515067 4684 scope.go:117] "RemoveContainer" containerID="826b0cf2d22ab202cb249edd094e0482f445efa82dc0dc6facfdb00f9fc9e729"
Jan 21 11:09:55 crc kubenswrapper[4684]: E0121 11:09:55.515744 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1"
Jan 21 11:09:59 crc kubenswrapper[4684]: I0121 11:09:59.038805 4684 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k8fdc"]
Jan 21 11:09:59 crc kubenswrapper[4684]: E0121 11:09:59.040012 4684 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="081393b0-241d-481b-8ecd-1cd2a6f9e804" containerName="registry-server"
Jan 21 11:09:59 crc kubenswrapper[4684]: I0121 11:09:59.040027 4684 state_mem.go:107] "Deleted CPUSet assignment" podUID="081393b0-241d-481b-8ecd-1cd2a6f9e804" containerName="registry-server"
Jan 21 11:09:59 crc kubenswrapper[4684]: I0121 11:09:59.040202 4684 memory_manager.go:354] "RemoveStaleState removing state" podUID="081393b0-241d-481b-8ecd-1cd2a6f9e804" containerName="registry-server"
Jan 21 11:09:59 crc kubenswrapper[4684]: I0121 11:09:59.041284 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k8fdc"
Jan 21 11:09:59 crc kubenswrapper[4684]: I0121 11:09:59.059146 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k8fdc"]
Jan 21 11:09:59 crc kubenswrapper[4684]: I0121 11:09:59.229287 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29267927-5d76-45ba-bd19-898172391030-utilities\") pod \"redhat-operators-k8fdc\" (UID: \"29267927-5d76-45ba-bd19-898172391030\") " pod="openshift-marketplace/redhat-operators-k8fdc"
Jan 21 11:09:59 crc kubenswrapper[4684]: I0121 11:09:59.229704 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29267927-5d76-45ba-bd19-898172391030-catalog-content\") pod \"redhat-operators-k8fdc\" (UID: \"29267927-5d76-45ba-bd19-898172391030\") " pod="openshift-marketplace/redhat-operators-k8fdc"
Jan 21 11:09:59 crc kubenswrapper[4684]: I0121 11:09:59.229845 4684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wtn7\" (UniqueName: \"kubernetes.io/projected/29267927-5d76-45ba-bd19-898172391030-kube-api-access-4wtn7\") pod \"redhat-operators-k8fdc\" (UID: \"29267927-5d76-45ba-bd19-898172391030\") " pod="openshift-marketplace/redhat-operators-k8fdc"
Jan 21 11:09:59 crc kubenswrapper[4684]: I0121 11:09:59.330986 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29267927-5d76-45ba-bd19-898172391030-catalog-content\") pod \"redhat-operators-k8fdc\" (UID: \"29267927-5d76-45ba-bd19-898172391030\") " pod="openshift-marketplace/redhat-operators-k8fdc"
Jan 21 11:09:59 crc kubenswrapper[4684]: I0121 11:09:59.331053 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wtn7\" (UniqueName: \"kubernetes.io/projected/29267927-5d76-45ba-bd19-898172391030-kube-api-access-4wtn7\") pod \"redhat-operators-k8fdc\" (UID: \"29267927-5d76-45ba-bd19-898172391030\") " pod="openshift-marketplace/redhat-operators-k8fdc"
Jan 21 11:09:59 crc kubenswrapper[4684]: I0121 11:09:59.331129 4684 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29267927-5d76-45ba-bd19-898172391030-utilities\") pod \"redhat-operators-k8fdc\" (UID: \"29267927-5d76-45ba-bd19-898172391030\") " pod="openshift-marketplace/redhat-operators-k8fdc"
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29267927-5d76-45ba-bd19-898172391030-catalog-content\") pod \"redhat-operators-k8fdc\" (UID: \"29267927-5d76-45ba-bd19-898172391030\") " pod="openshift-marketplace/redhat-operators-k8fdc" Jan 21 11:09:59 crc kubenswrapper[4684]: I0121 11:09:59.331699 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29267927-5d76-45ba-bd19-898172391030-utilities\") pod \"redhat-operators-k8fdc\" (UID: \"29267927-5d76-45ba-bd19-898172391030\") " pod="openshift-marketplace/redhat-operators-k8fdc" Jan 21 11:09:59 crc kubenswrapper[4684]: I0121 11:09:59.351616 4684 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wtn7\" (UniqueName: \"kubernetes.io/projected/29267927-5d76-45ba-bd19-898172391030-kube-api-access-4wtn7\") pod \"redhat-operators-k8fdc\" (UID: \"29267927-5d76-45ba-bd19-898172391030\") " pod="openshift-marketplace/redhat-operators-k8fdc" Jan 21 11:09:59 crc kubenswrapper[4684]: I0121 11:09:59.362059 4684 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k8fdc" Jan 21 11:09:59 crc kubenswrapper[4684]: I0121 11:09:59.580078 4684 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k8fdc"] Jan 21 11:09:59 crc kubenswrapper[4684]: I0121 11:09:59.867210 4684 generic.go:334] "Generic (PLEG): container finished" podID="29267927-5d76-45ba-bd19-898172391030" containerID="6815ca16225a6fed72d9693aa5930610e642fd37d9f6101d15d626340ae0a4ff" exitCode=0 Jan 21 11:09:59 crc kubenswrapper[4684]: I0121 11:09:59.867263 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8fdc" event={"ID":"29267927-5d76-45ba-bd19-898172391030","Type":"ContainerDied","Data":"6815ca16225a6fed72d9693aa5930610e642fd37d9f6101d15d626340ae0a4ff"} Jan 21 11:09:59 crc kubenswrapper[4684]: I0121 11:09:59.867493 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8fdc" event={"ID":"29267927-5d76-45ba-bd19-898172391030","Type":"ContainerStarted","Data":"b11e2a5751e2e68c41e9e10533dfcc51c49fd95d7cbfdd8bfdc9a8837a353610"} Jan 21 11:10:01 crc kubenswrapper[4684]: I0121 11:10:01.881040 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8fdc" event={"ID":"29267927-5d76-45ba-bd19-898172391030","Type":"ContainerStarted","Data":"56ec828b5026babc33ad5a734e1209761550093bfbd418052ed56d0b9ff3722b"} Jan 21 11:10:03 crc kubenswrapper[4684]: I0121 11:10:03.896272 4684 generic.go:334] "Generic (PLEG): container finished" podID="29267927-5d76-45ba-bd19-898172391030" containerID="56ec828b5026babc33ad5a734e1209761550093bfbd418052ed56d0b9ff3722b" exitCode=0 Jan 21 11:10:03 crc kubenswrapper[4684]: I0121 11:10:03.896377 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8fdc" event={"ID":"29267927-5d76-45ba-bd19-898172391030","Type":"ContainerDied","Data":"56ec828b5026babc33ad5a734e1209761550093bfbd418052ed56d0b9ff3722b"} Jan 21 11:10:07 crc kubenswrapper[4684]: I0121 11:10:07.930992 4684 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k8fdc" event={"ID":"29267927-5d76-45ba-bd19-898172391030","Type":"ContainerStarted","Data":"52f1b679e39f1ca29fc1fbe56302c42ab2cb977a0e5a076359ea6120146bcf83"} Jan 21 11:10:07 crc kubenswrapper[4684]: I0121 11:10:07.960056 4684 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k8fdc" podStartSLOduration=1.854438636 podStartE2EDuration="8.960030215s" podCreationTimestamp="2026-01-21 11:09:59 +0000 UTC" firstStartedPulling="2026-01-21 11:09:59.86873577 +0000 UTC m=+3837.626818737" lastFinishedPulling="2026-01-21 11:10:06.974327349 +0000 UTC m=+3844.732410316" observedRunningTime="2026-01-21 11:10:07.952086854 +0000 UTC m=+3845.710169841" watchObservedRunningTime="2026-01-21 11:10:07.960030215 +0000 UTC m=+3845.718113182" Jan 21 11:10:09 crc kubenswrapper[4684]: I0121 11:10:09.363199 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k8fdc" Jan 21 11:10:09 crc kubenswrapper[4684]: I0121 11:10:09.363662 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k8fdc" Jan 21 11:10:09 crc kubenswrapper[4684]: I0121 11:10:09.514119 4684 scope.go:117] "RemoveContainer" containerID="826b0cf2d22ab202cb249edd094e0482f445efa82dc0dc6facfdb00f9fc9e729" Jan 21 11:10:09 crc kubenswrapper[4684]: E0121 11:10:09.514392 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1" Jan 21 11:10:10 crc kubenswrapper[4684]: I0121 11:10:10.400836 4684 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k8fdc" podUID="29267927-5d76-45ba-bd19-898172391030" containerName="registry-server" probeResult="failure" output=< Jan 21 11:10:10 crc kubenswrapper[4684]: timeout: failed to connect service ":50051" within 1s Jan 21 11:10:10 crc kubenswrapper[4684]: > Jan 21 11:10:19 crc kubenswrapper[4684]: I0121 11:10:19.405631 4684 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k8fdc" Jan 21 11:10:19 crc kubenswrapper[4684]: I0121 11:10:19.451439 4684 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k8fdc" Jan 21 11:10:19 crc kubenswrapper[4684]: I0121 11:10:19.642735 4684 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k8fdc"] Jan 21 11:10:20 crc kubenswrapper[4684]: I0121 11:10:20.514150 4684 scope.go:117] "RemoveContainer" containerID="826b0cf2d22ab202cb249edd094e0482f445efa82dc0dc6facfdb00f9fc9e729" Jan 21 11:10:20 crc kubenswrapper[4684]: E0121 11:10:20.514631 4684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sff6s_openshift-machine-config-operator(55d2c484-cf10-46b5-913f-2a033a2ff5c1)\"" pod="openshift-machine-config-operator/machine-config-daemon-sff6s" podUID="55d2c484-cf10-46b5-913f-2a033a2ff5c1"